00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3695 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3296 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.050 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.051 The recommended git tool is: git 00:00:00.051 using credential 00000000-0000-0000-0000-000000000002 00:00:00.054 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.077 Fetching changes from the remote Git repository 00:00:00.079 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.116 Using shallow fetch with depth 1 00:00:00.116 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.116 > git --version # timeout=10 00:00:00.151 > git --version # 'git version 2.39.2' 00:00:00.151 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.188 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.188 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.653 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.663 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.674 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD) 00:00:04.674 > git config core.sparsecheckout # timeout=10 00:00:04.684 > git read-tree -mu HEAD # timeout=10 00:00:04.696 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5 00:00:04.716 Commit message: "packer: Add bios builder" 00:00:04.716 > git rev-list --no-walk bb4bbb76f2437bc8cff7e7e4a466bce7165cd7f0 # timeout=10 00:00:04.809 [Pipeline] Start of Pipeline 00:00:04.820 [Pipeline] library 00:00:04.822 Loading library shm_lib@master 00:00:04.822 Library shm_lib@master is cached. Copying from home. 00:00:04.834 [Pipeline] node 00:00:04.842 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.843 [Pipeline] { 00:00:04.853 [Pipeline] catchError 00:00:04.854 [Pipeline] { 00:00:04.865 [Pipeline] wrap 00:00:04.873 [Pipeline] { 00:00:04.880 [Pipeline] stage 00:00:04.882 [Pipeline] { (Prologue) 00:00:05.041 [Pipeline] sh 00:00:05.322 + logger -p user.info -t JENKINS-CI 00:00:05.342 [Pipeline] echo 00:00:05.344 Node: GP11 00:00:05.351 [Pipeline] sh 00:00:05.646 [Pipeline] setCustomBuildProperty 00:00:05.654 [Pipeline] echo 00:00:05.655 Cleanup processes 00:00:05.659 [Pipeline] sh 00:00:05.939 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.939 1147064 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.950 [Pipeline] sh 00:00:06.229 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.229 ++ grep -v 'sudo pgrep' 00:00:06.229 ++ awk '{print $1}' 00:00:06.229 + sudo kill -9 00:00:06.229 + true 00:00:06.241 [Pipeline] cleanWs 00:00:06.250 [WS-CLEANUP] Deleting project workspace... 00:00:06.250 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.256 [WS-CLEANUP] done 00:00:06.259 [Pipeline] setCustomBuildProperty 00:00:06.272 [Pipeline] sh 00:00:06.551 + sudo git config --global --replace-all safe.directory '*' 00:00:06.619 [Pipeline] httpRequest 00:00:06.637 [Pipeline] echo 00:00:06.639 Sorcerer 10.211.164.101 is alive 00:00:06.645 [Pipeline] httpRequest 00:00:06.649 HttpMethod: GET 00:00:06.650 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:06.650 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:06.668 Response Code: HTTP/1.1 200 OK 00:00:06.669 Success: Status code 200 is in the accepted range: 200,404 00:00:06.669 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:09.921 [Pipeline] sh 00:00:10.211 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:10.227 [Pipeline] httpRequest 00:00:10.258 [Pipeline] echo 00:00:10.260 Sorcerer 10.211.164.101 is alive 00:00:10.268 [Pipeline] httpRequest 00:00:10.273 HttpMethod: GET 00:00:10.274 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:10.274 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:10.289 Response Code: HTTP/1.1 200 OK 00:00:10.290 Success: Status code 200 is in the accepted range: 200,404 00:00:10.290 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:01:03.428 [Pipeline] sh 00:01:03.708 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:01:07.003 [Pipeline] sh 00:01:07.286 + git -C spdk log --oneline -n5 00:01:07.286 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:01:07.286 fc2398dfa raid: clear base bdev configure_cb after executing 00:01:07.286 5558f3f50 raid: complete bdev_raid_create after sb is written 00:01:07.286 d005e023b raid: fix empty slot not updated in sb after resize 00:01:07.286 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set 00:01:07.311 [Pipeline] withCredentials 00:01:07.322 > git --version # timeout=10 00:01:07.332 > git --version # 'git version 2.39.2' 00:01:07.349 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:07.352 [Pipeline] { 00:01:07.360 [Pipeline] retry 00:01:07.362 [Pipeline] { 00:01:07.386 [Pipeline] sh 00:01:07.671 + git ls-remote http://dpdk.org/git/dpdk main 00:01:10.981 [Pipeline] } 00:01:11.003 [Pipeline] // retry 00:01:11.008 [Pipeline] } 00:01:11.029 [Pipeline] // withCredentials 00:01:11.039 [Pipeline] httpRequest 00:01:11.057 [Pipeline] echo 00:01:11.059 Sorcerer 10.211.164.101 is alive 00:01:11.067 [Pipeline] httpRequest 00:01:11.072 HttpMethod: GET 00:01:11.073 URL: http://10.211.164.101/packages/dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz 00:01:11.073 Sending request to url: http://10.211.164.101/packages/dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz 00:01:11.076 Response Code: HTTP/1.1 200 OK 00:01:11.077 Success: Status code 200 is in the accepted range: 200,404 00:01:11.077 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz 00:01:17.249 [Pipeline] sh 00:01:17.583 + tar --no-same-owner -xf dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz 00:01:18.972 [Pipeline] sh 00:01:19.257 + git -C dpdk log --oneline -n5 00:01:19.258 82c47f005b version: 24.07-rc3 00:01:19.258 d9d1be537e doc: remove reference to mbuf pkt field 00:01:19.258 52c7393a03 doc: set required MinGW version in Windows guide 00:01:19.258 92439dc9ac dts: improve starting and stopping interactive shells 00:01:19.258 2b648cd4e4 dts: add context manager for interactive shells 00:01:19.269 [Pipeline] } 00:01:19.286 [Pipeline] // stage 00:01:19.296 [Pipeline] stage 00:01:19.299 [Pipeline] { (Prepare) 00:01:19.321 [Pipeline] writeFile 00:01:19.339 [Pipeline] sh 00:01:19.674 + logger -p user.info -t JENKINS-CI 00:01:19.686 [Pipeline] sh 00:01:19.970 + logger -p user.info -t JENKINS-CI 00:01:19.983 [Pipeline] sh 00:01:20.266 + cat autorun-spdk.conf 00:01:20.266 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.266 SPDK_TEST_NVMF=1 00:01:20.266 SPDK_TEST_NVME_CLI=1 00:01:20.266 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.266 SPDK_TEST_NVMF_NICS=e810 00:01:20.266 SPDK_TEST_VFIOUSER=1 00:01:20.266 SPDK_RUN_UBSAN=1 00:01:20.266 NET_TYPE=phy 00:01:20.266 SPDK_TEST_NATIVE_DPDK=main 00:01:20.266 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:20.273 RUN_NIGHTLY=1 00:01:20.277 [Pipeline] readFile 00:01:20.300 [Pipeline] withEnv 00:01:20.302 [Pipeline] { 00:01:20.316 [Pipeline] sh 00:01:20.600 + set -ex 00:01:20.600 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:20.600 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:20.600 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.600 ++ SPDK_TEST_NVMF=1 00:01:20.600 ++ SPDK_TEST_NVME_CLI=1 00:01:20.600 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.600 ++ SPDK_TEST_NVMF_NICS=e810 00:01:20.600 ++ SPDK_TEST_VFIOUSER=1 00:01:20.600 ++ SPDK_RUN_UBSAN=1 00:01:20.600 ++ NET_TYPE=phy 00:01:20.600 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:20.600 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 
00:01:20.600 ++ RUN_NIGHTLY=1 00:01:20.600 + case $SPDK_TEST_NVMF_NICS in 00:01:20.600 + DRIVERS=ice 00:01:20.600 + [[ tcp == \r\d\m\a ]] 00:01:20.600 + [[ -n ice ]] 00:01:20.600 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:20.600 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:20.600 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:20.600 rmmod: ERROR: Module irdma is not currently loaded 00:01:20.600 rmmod: ERROR: Module i40iw is not currently loaded 00:01:20.600 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:20.600 + true 00:01:20.600 + for D in $DRIVERS 00:01:20.600 + sudo modprobe ice 00:01:20.601 + exit 0 00:01:20.610 [Pipeline] } 00:01:20.627 [Pipeline] // withEnv 00:01:20.633 [Pipeline] } 00:01:20.649 [Pipeline] // stage 00:01:20.658 [Pipeline] catchError 00:01:20.660 [Pipeline] { 00:01:20.674 [Pipeline] timeout 00:01:20.674 Timeout set to expire in 50 min 00:01:20.675 [Pipeline] { 00:01:20.691 [Pipeline] stage 00:01:20.693 [Pipeline] { (Tests) 00:01:20.707 [Pipeline] sh 00:01:20.986 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.986 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.986 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.986 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:20.986 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:20.986 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:20.986 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:20.986 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:20.986 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:20.986 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:20.986 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:20.986 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.986 + source /etc/os-release 00:01:20.986 ++ NAME='Fedora Linux' 00:01:20.986 ++ VERSION='38 (Cloud Edition)' 00:01:20.986 ++ ID=fedora 00:01:20.986 ++ VERSION_ID=38 00:01:20.986 ++ VERSION_CODENAME= 00:01:20.986 ++ PLATFORM_ID=platform:f38 00:01:20.986 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:20.986 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:20.986 ++ LOGO=fedora-logo-icon 00:01:20.986 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:20.986 ++ HOME_URL=https://fedoraproject.org/ 00:01:20.986 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:20.986 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:20.986 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:20.986 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:20.986 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:20.986 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:20.986 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:20.986 ++ SUPPORT_END=2024-05-14 00:01:20.986 ++ VARIANT='Cloud Edition' 00:01:20.986 ++ VARIANT_ID=cloud 00:01:20.986 + uname -a 00:01:20.986 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:20.986 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:21.923 Hugepages 00:01:21.923 node hugesize free / total 00:01:21.923 node0 1048576kB 0 / 0 00:01:21.923 node0 2048kB 0 / 0 00:01:21.923 node1 1048576kB 0 / 0 00:01:21.923 node1 2048kB 0 / 0 00:01:21.923 00:01:21.923 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:21.923 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 
00:01:21.923 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:21.923 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:21.923 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:21.923 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:21.923 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:21.923 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:21.923 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:21.923 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:21.923 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:21.923 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:21.923 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:21.923 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:21.923 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:21.923 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:21.923 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:21.923 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:21.923 + rm -f /tmp/spdk-ld-path 00:01:21.923 + source autorun-spdk.conf 00:01:21.923 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.923 ++ SPDK_TEST_NVMF=1 00:01:21.923 ++ SPDK_TEST_NVME_CLI=1 00:01:21.923 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:21.923 ++ SPDK_TEST_NVMF_NICS=e810 00:01:21.923 ++ SPDK_TEST_VFIOUSER=1 00:01:21.923 ++ SPDK_RUN_UBSAN=1 00:01:21.923 ++ NET_TYPE=phy 00:01:21.923 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:21.923 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:21.923 ++ RUN_NIGHTLY=1 00:01:21.924 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:21.924 + [[ -n '' ]] 00:01:21.924 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:22.182 + for M in /var/spdk/build-*-manifest.txt 00:01:22.182 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:22.182 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:22.182 + for M in /var/spdk/build-*-manifest.txt 00:01:22.182 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:22.182 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:22.182 ++ uname 00:01:22.182 + [[ Linux == \L\i\n\u\x ]] 00:01:22.183 + sudo dmesg -T 00:01:22.183 + sudo dmesg --clear 00:01:22.183 + dmesg_pid=1148279 00:01:22.183 + [[ Fedora Linux == FreeBSD ]] 00:01:22.183 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.183 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.183 + sudo dmesg -Tw 00:01:22.183 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:22.183 + [[ -x /usr/src/fio-static/fio ]] 00:01:22.183 + export FIO_BIN=/usr/src/fio-static/fio 00:01:22.183 + FIO_BIN=/usr/src/fio-static/fio 00:01:22.183 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:22.183 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:22.183 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:22.183 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.183 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.183 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:22.183 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.183 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.183 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:22.183 Test configuration: 00:01:22.183 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.183 SPDK_TEST_NVMF=1 00:01:22.183 SPDK_TEST_NVME_CLI=1 00:01:22.183 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.183 SPDK_TEST_NVMF_NICS=e810 00:01:22.183 SPDK_TEST_VFIOUSER=1 00:01:22.183 SPDK_RUN_UBSAN=1 00:01:22.183 NET_TYPE=phy 00:01:22.183 SPDK_TEST_NATIVE_DPDK=main 00:01:22.183 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:22.183 RUN_NIGHTLY=1 23:07:19 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:22.183 23:07:19 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:22.183 23:07:19 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:22.183 23:07:19 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:22.183 23:07:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.183 23:07:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.183 23:07:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.183 23:07:19 -- paths/export.sh@5 -- $ export PATH 00:01:22.183 23:07:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.183 23:07:19 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:22.183 23:07:19 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:22.183 23:07:19 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721941639.XXXXXX 00:01:22.183 23:07:19 -- common/autobuild_common.sh@447 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721941639.3zlurF 00:01:22.183 23:07:19 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:22.183 23:07:19 -- common/autobuild_common.sh@453 -- $ '[' -n main ']' 00:01:22.183 23:07:19 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:22.183 23:07:19 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:22.183 23:07:19 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:22.183 23:07:19 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:22.183 23:07:19 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:22.183 23:07:19 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:01:22.183 23:07:19 -- common/autotest_common.sh@10 -- $ set +x 00:01:22.183 23:07:19 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:22.183 23:07:19 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:22.183 23:07:19 -- pm/common@17 -- $ local monitor 00:01:22.183 23:07:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.183 23:07:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.183 23:07:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.183 23:07:19 -- pm/common@21 -- $ date +%s 00:01:22.183 23:07:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.183 23:07:19 -- pm/common@21 -- $ date +%s 00:01:22.183 23:07:19 -- pm/common@25 -- $ sleep 1 00:01:22.183 23:07:19 -- pm/common@21 -- $ date +%s 00:01:22.183 23:07:19 -- pm/common@21 -- $ date +%s 00:01:22.183 23:07:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721941639 00:01:22.183 23:07:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721941639 00:01:22.183 23:07:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721941639 00:01:22.183 23:07:19 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721941639 00:01:22.183 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721941639_collect-vmstat.pm.log 00:01:22.183 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721941639_collect-cpu-load.pm.log 00:01:22.183 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721941639_collect-cpu-temp.pm.log 00:01:22.183 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721941639_collect-bmc-pm.bmc.pm.log 00:01:23.116 23:07:20 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:23.116 23:07:20 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:23.116 23:07:20 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:23.116 23:07:20 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:23.116 23:07:20 -- spdk/autobuild.sh@16 -- $ date -u 00:01:23.116 Thu Jul 25 09:07:20 PM UTC 2024 00:01:23.116 23:07:20 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:23.116 v24.09-pre-321-g704257090 00:01:23.116 23:07:20 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:23.116 23:07:20 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:23.116 23:07:20 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:23.116 23:07:20 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:23.116 23:07:20 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:23.116 23:07:20 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.116 ************************************ 00:01:23.116 START TEST ubsan 00:01:23.116 ************************************ 00:01:23.116 23:07:20 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:23.116 using ubsan 00:01:23.116 00:01:23.116 real 0m0.000s 00:01:23.116 user 0m0.000s 00:01:23.116 sys 0m0.000s 00:01:23.116 23:07:20 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:23.116 23:07:20 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:23.116 ************************************ 00:01:23.116 END TEST ubsan 00:01:23.116 ************************************ 00:01:23.374 23:07:20 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:01:23.374 23:07:20 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:23.374 23:07:20 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:23.374 23:07:20 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:01:23.374 23:07:20 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:23.374 23:07:20 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.374 ************************************ 00:01:23.374 START TEST build_native_dpdk 00:01:23.374 ************************************ 00:01:23.374 23:07:20 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:01:23.374 23:07:20 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:23.374 23:07:20 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:23.374 23:07:20 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:23.374 23:07:20 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:23.374 23:07:20 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:23.374 23:07:20 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:23.374 23:07:20 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:23.374 23:07:20 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:23.374 23:07:20 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:23.374 23:07:20 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:23.374 
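The START TEST / END TEST banners and the real/user/sys lines in the ubsan test above are produced by the run_test helper from autotest_common.sh (visible in the trace prefixes). A minimal sketch of that banner-and-timing pattern, using a hypothetical run_test_sketch name; the real helper does considerably more (xtrace control, per-test bookkeeping):

    # Sketch only: mirrors the banner-and-timing pattern seen in the log,
    # not SPDK's actual run_test implementation.
    run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"            # the real/user/sys lines in the log come from `time`
      local rc=$?          # exit status of the timed command
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
    }
    # e.g., mirroring the log: run_test_sketch ubsan echo 'using ubsan'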
23:07:20 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:23.374 23:07:20 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:23.374 23:07:20 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:23.374 23:07:20 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:23.374 23:07:20 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:23.375 23:07:20 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:23.375 23:07:20 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:23.375 23:07:20 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:23.375 23:07:20 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:23.375 23:07:20 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:23.375 82c47f005b version: 24.07-rc3 00:01:23.375 d9d1be537e doc: remove reference to mbuf pkt field 00:01:23.375 52c7393a03 doc: set required MinGW version in Windows guide 00:01:23.375 92439dc9ac dts: improve starting and stopping interactive shells 00:01:23.375 2b648cd4e4 dts: add context manager for interactive shells 00:01:23.375 23:07:20 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:23.375 23:07:20 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:23.375 23:07:20 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc3 00:01:23.375 23:07:20 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:23.375 23:07:20 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:23.375 23:07:20 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:23.375 23:07:20 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:23.375 23:07:20 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:23.375 23:07:20 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:23.375 23:07:20 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:23.375 23:07:20 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:23.375 23:07:20 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:23.375 23:07:20 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:23.375 23:07:20 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:23.375 23:07:20 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:23.375 23:07:20 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:23.375 23:07:20 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:23.375 23:07:20 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc3 21.11.0 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc3 '<' 21.11.0 
00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:23.375 23:07:20 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:23.375 patching file config/rte_config.h 00:01:23.375 Hunk #1 succeeded at 70 (offset 11 lines). 
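The lt/cmp_versions trace above comes from spdk/scripts/common.sh: both version strings are split on ".", "-" and ":" and compared field by field, with the decimal helper normalizing each field ("07" becomes 7; non-numeric or empty fields such as "rc3" become 0). 24.07.0-rc3 is therefore not older than 21.11.0, so the rte_config.h patch is applied. A condensed sketch of the same comparison, assuming a hypothetical version_lt helper rather than the real cmp_versions:

    # Sketch only: mirrors the traced behavior, not scripts/common.sh itself.
    version_lt() {                        # "is $1 older than $2?"
      local IFS='.-:' i x y a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0}; y=${b[i]:-0}
        [[ $x =~ ^[0-9]+$ ]] || x=0       # "rc3" -> 0, as in the decimal trace
        [[ $y =~ ^[0-9]+$ ]] || y=0
        (( 10#$x > 10#$y )) && return 1   # 10# avoids octal surprises with "07"
        (( 10#$x < 10#$y )) && return 0
      done
      return 1                            # equal: not less-than
    }
    version_lt 24.07.0-rc3 21.11.0 || echo "not older; rte_config.h patch applies"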
00:01:23.375 23:07:20 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.07.0-rc3 24.07.0 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc3 '<' 24.07.0 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 07 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@350 -- $ local d=07 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@352 -- $ echo 7 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=7 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 07 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@350 -- $ local d=07 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@352 -- $ echo 7 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=7 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 0 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@350 -- $ local d=0 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 0 =~ ^[0-9]+$ ]] 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@352 -- $ echo 0 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=0 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 0 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@350 -- $ local d=0 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 0 =~ ^[0-9]+$ ]] 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@352 -- $ echo 0 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=0 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@362 -- $ decimal rc3 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@350 -- $ local d=rc3 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@351 -- $ [[ rc3 =~ ^[0-9]+$ ]] 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@353 -- $ [[ rc3 =~ ^0x ]] 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@353 -- $ [[ rc3 =~ ^[a-f0-9]+$ ]] 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@357 -- $ echo 0 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=0 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@363 -- $ decimal '' 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@350 -- $ local d= 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@351 -- $ [[ '' =~ ^[0-9]+$ ]] 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@353 -- $ [[ '' =~ ^0x ]] 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@353 -- $ [[ '' =~ ^[a-f0-9]+$ ]] 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@357 -- $ echo 0 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=0 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@367 -- $ [[ 24 7 0 0 == \2\4\ \7\ \0\ \0 ]] 00:01:23.375 23:07:20 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:23.376 23:07:20 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:23.376 23:07:20 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:01:23.376 23:07:20 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:23.376 23:07:20 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:23.376 23:07:20 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:27.560 The Meson build system 00:01:27.560 Version: 1.3.1 00:01:27.560 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:27.560 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:27.560 Build type: native build 00:01:27.560 Program cat found: YES (/usr/bin/cat) 00:01:27.560 Project name: DPDK 00:01:27.560 Project version: 24.07.0-rc3 00:01:27.560 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:27.560 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:27.560 Host machine cpu family: x86_64 00:01:27.560 Host machine cpu: x86_64 00:01:27.560 Message: ## Building in Developer Mode ## 00:01:27.560 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:27.560 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:27.560 Program options-ibverbs-static.sh found: YES 
(/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:27.560 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:01:27.560 Program cat found: YES (/usr/bin/cat) 00:01:27.560 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:01:27.560 Compiler for C supports arguments -march=native: YES 00:01:27.560 Checking for size of "void *" : 8 00:01:27.560 Checking for size of "void *" : 8 (cached) 00:01:27.560 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:27.560 Library m found: YES 00:01:27.560 Library numa found: YES 00:01:27.560 Has header "numaif.h" : YES 00:01:27.560 Library fdt found: NO 00:01:27.560 Library execinfo found: NO 00:01:27.560 Has header "execinfo.h" : YES 00:01:27.560 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:27.560 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:27.560 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:27.560 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:27.560 Run-time dependency openssl found: YES 3.0.9 00:01:27.560 Run-time dependency libpcap found: YES 1.10.4 00:01:27.560 Has header "pcap.h" with dependency libpcap: YES 00:01:27.560 Compiler for C supports arguments -Wcast-qual: YES 00:01:27.560 Compiler for C supports arguments -Wdeprecated: YES 00:01:27.560 Compiler for C supports arguments -Wformat: YES 00:01:27.560 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:27.560 Compiler for C supports arguments -Wformat-security: NO 00:01:27.560 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:27.560 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:27.560 Compiler for C supports arguments -Wnested-externs: YES 00:01:27.560 Compiler for C supports arguments -Wold-style-definition: YES 00:01:27.560 Compiler for C supports arguments -Wpointer-arith: YES 00:01:27.560 Compiler for C supports arguments -Wsign-compare: YES 00:01:27.560 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:27.560 Compiler for C supports arguments -Wundef: YES 00:01:27.560 Compiler for C supports arguments -Wwrite-strings: YES 00:01:27.560 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:27.560 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:27.560 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:27.560 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:27.560 Program objdump found: YES (/usr/bin/objdump) 00:01:27.560 Compiler for C supports arguments -mavx512f: YES 00:01:27.560 Checking if "AVX512 checking" compiles: YES 00:01:27.560 Fetching value of define "__SSE4_2__" : 1 00:01:27.560 Fetching value of define "__AES__" : 1 00:01:27.560 Fetching value of define "__AVX__" : 1 00:01:27.560 Fetching value of define "__AVX2__" : (undefined) 00:01:27.560 Fetching value of define "__AVX512BW__" : (undefined) 00:01:27.560 Fetching value of define "__AVX512CD__" : (undefined) 00:01:27.560 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:27.560 Fetching value of define "__AVX512F__" : (undefined) 00:01:27.560 Fetching value of define "__AVX512VL__" : (undefined) 00:01:27.560 Fetching value of define "__PCLMUL__" : 1 00:01:27.560 Fetching value of define "__RDRND__" : 1 00:01:27.560 Fetching value of define "__RDSEED__" : (undefined) 00:01:27.560 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:27.560 
Compiler for C supports arguments -Wno-format-truncation: YES 00:01:27.560 Message: lib/log: Defining dependency "log" 00:01:27.560 Message: lib/kvargs: Defining dependency "kvargs" 00:01:27.560 Message: lib/argparse: Defining dependency "argparse" 00:01:27.560 Message: lib/telemetry: Defining dependency "telemetry" 00:01:27.560 Checking for function "getentropy" : NO 00:01:27.560 Message: lib/eal: Defining dependency "eal" 00:01:27.560 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:01:27.560 Message: lib/ring: Defining dependency "ring" 00:01:27.560 Message: lib/rcu: Defining dependency "rcu" 00:01:27.560 Message: lib/mempool: Defining dependency "mempool" 00:01:27.560 Message: lib/mbuf: Defining dependency "mbuf" 00:01:27.560 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:27.560 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:27.560 Compiler for C supports arguments -mpclmul: YES 00:01:27.560 Compiler for C supports arguments -maes: YES 00:01:27.560 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:27.560 Compiler for C supports arguments -mavx512bw: YES 00:01:27.560 Compiler for C supports arguments -mavx512dq: YES 00:01:27.560 Compiler for C supports arguments -mavx512vl: YES 00:01:27.560 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:27.560 Compiler for C supports arguments -mavx2: YES 00:01:27.560 Compiler for C supports arguments -mavx: YES 00:01:27.560 Message: lib/net: Defining dependency "net" 00:01:27.560 Message: lib/meter: Defining dependency "meter" 00:01:27.560 Message: lib/ethdev: Defining dependency "ethdev" 00:01:27.560 Message: lib/pci: Defining dependency "pci" 00:01:27.560 Message: lib/cmdline: Defining dependency "cmdline" 00:01:27.560 Message: lib/metrics: Defining dependency "metrics" 00:01:27.560 Message: lib/hash: Defining dependency "hash" 00:01:27.560 Message: lib/timer: Defining dependency "timer" 00:01:27.560 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:27.560 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:27.560 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:27.560 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:27.560 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:27.560 Message: lib/acl: Defining dependency "acl" 00:01:27.560 Message: lib/bbdev: Defining dependency "bbdev" 00:01:27.560 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:27.560 Run-time dependency libelf found: YES 0.190 00:01:27.560 Message: lib/bpf: Defining dependency "bpf" 00:01:27.560 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:27.560 Message: lib/compressdev: Defining dependency "compressdev" 00:01:27.560 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:27.560 Message: lib/distributor: Defining dependency "distributor" 00:01:27.560 Message: lib/dmadev: Defining dependency "dmadev" 00:01:27.560 Message: lib/efd: Defining dependency "efd" 00:01:27.560 Message: lib/eventdev: Defining dependency "eventdev" 00:01:27.560 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:27.560 Message: lib/gpudev: Defining dependency "gpudev" 00:01:27.560 Message: lib/gro: Defining dependency "gro" 00:01:27.560 Message: lib/gso: Defining dependency "gso" 00:01:27.560 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:27.560 Message: lib/jobstats: Defining dependency "jobstats" 00:01:27.560 Message: lib/latencystats: Defining dependency 
"latencystats" 00:01:27.561 Message: lib/lpm: Defining dependency "lpm" 00:01:27.561 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:27.561 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:27.561 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:27.561 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:27.561 Message: lib/member: Defining dependency "member" 00:01:27.561 Message: lib/pcapng: Defining dependency "pcapng" 00:01:27.561 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:27.561 Message: lib/power: Defining dependency "power" 00:01:27.561 Message: lib/rawdev: Defining dependency "rawdev" 00:01:27.561 Message: lib/regexdev: Defining dependency "regexdev" 00:01:27.561 Message: lib/mldev: Defining dependency "mldev" 00:01:27.561 Message: lib/rib: Defining dependency "rib" 00:01:27.561 Message: lib/reorder: Defining dependency "reorder" 00:01:27.561 Message: lib/sched: Defining dependency "sched" 00:01:27.561 Message: lib/security: Defining dependency "security" 00:01:27.561 Message: lib/stack: Defining dependency "stack" 00:01:27.561 Has header "linux/userfaultfd.h" : YES 00:01:27.561 Has header "linux/vduse.h" : YES 00:01:27.561 Message: lib/vhost: Defining dependency "vhost" 00:01:27.561 Message: lib/ipsec: Defining dependency "ipsec" 00:01:27.561 Message: lib/pdcp: Defining dependency "pdcp" 00:01:27.561 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:27.561 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:27.561 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:27.561 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:27.561 Message: lib/fib: Defining dependency "fib" 00:01:27.561 Message: lib/port: Defining dependency "port" 00:01:27.561 Message: lib/pdump: Defining dependency "pdump" 00:01:27.561 Message: lib/table: Defining dependency "table" 00:01:27.561 Message: lib/pipeline: Defining dependency "pipeline" 00:01:27.561 Message: lib/graph: Defining dependency "graph" 00:01:27.561 Message: lib/node: Defining dependency "node" 00:01:29.464 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:29.464 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:29.464 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:29.464 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:29.464 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:29.464 Compiler for C supports arguments -Wno-unused-value: YES 00:01:29.464 Compiler for C supports arguments -Wno-format: YES 00:01:29.464 Compiler for C supports arguments -Wno-format-security: YES 00:01:29.464 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:29.464 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:29.464 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:29.464 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:29.464 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:29.464 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:29.464 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:29.464 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:29.464 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:29.464 Has header "sys/epoll.h" : YES 00:01:29.464 Program doxygen found: YES (/usr/bin/doxygen) 00:01:29.464 Configuring doxy-api-html.conf using configuration 
00:01:29.464 Configuring doxy-api-man.conf using configuration 00:01:29.464 Program mandb found: YES (/usr/bin/mandb) 00:01:29.464 Program sphinx-build found: NO 00:01:29.464 Configuring rte_build_config.h using configuration 00:01:29.464 Message: 00:01:29.464 ================= 00:01:29.464 Applications Enabled 00:01:29.464 ================= 00:01:29.464 00:01:29.464 apps: 00:01:29.464 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:29.464 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:29.464 test-pmd, test-regex, test-sad, test-security-perf, 00:01:29.464 00:01:29.464 Message: 00:01:29.464 ================= 00:01:29.464 Libraries Enabled 00:01:29.464 ================= 00:01:29.464 00:01:29.464 libs: 00:01:29.464 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:01:29.464 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:01:29.464 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:01:29.464 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:01:29.464 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:01:29.464 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:01:29.464 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:01:29.464 graph, node, 00:01:29.464 00:01:29.464 Message: 00:01:29.464 =============== 00:01:29.464 Drivers Enabled 00:01:29.464 =============== 00:01:29.464 00:01:29.464 common: 00:01:29.464 00:01:29.464 bus: 00:01:29.464 pci, vdev, 00:01:29.464 mempool: 00:01:29.464 ring, 00:01:29.464 dma: 00:01:29.464 00:01:29.464 net: 00:01:29.464 i40e, 00:01:29.464 raw: 00:01:29.464 00:01:29.464 crypto: 00:01:29.464 00:01:29.464 compress: 00:01:29.464 00:01:29.464 regex: 00:01:29.464 00:01:29.464 ml: 00:01:29.464 00:01:29.464 vdpa: 00:01:29.464 00:01:29.464 event: 00:01:29.464 00:01:29.464 baseband: 00:01:29.464 00:01:29.464 gpu: 00:01:29.464 00:01:29.464 00:01:29.464 Message: 00:01:29.464 ================= 00:01:29.464 Content Skipped 00:01:29.464 ================= 00:01:29.464 00:01:29.464 apps: 00:01:29.464 00:01:29.464 libs: 00:01:29.464 00:01:29.464 drivers: 00:01:29.464 common/cpt: not in enabled drivers build config 00:01:29.464 common/dpaax: not in enabled drivers build config 00:01:29.464 common/iavf: not in enabled drivers build config 00:01:29.465 common/idpf: not in enabled drivers build config 00:01:29.465 common/ionic: not in enabled drivers build config 00:01:29.465 common/mvep: not in enabled drivers build config 00:01:29.465 common/octeontx: not in enabled drivers build config 00:01:29.465 bus/auxiliary: not in enabled drivers build config 00:01:29.465 bus/cdx: not in enabled drivers build config 00:01:29.465 bus/dpaa: not in enabled drivers build config 00:01:29.465 bus/fslmc: not in enabled drivers build config 00:01:29.465 bus/ifpga: not in enabled drivers build config 00:01:29.465 bus/platform: not in enabled drivers build config 00:01:29.465 bus/uacce: not in enabled drivers build config 00:01:29.465 bus/vmbus: not in enabled drivers build config 00:01:29.465 common/cnxk: not in enabled drivers build config 00:01:29.465 common/mlx5: not in enabled drivers build config 00:01:29.465 common/nfp: not in enabled drivers build config 00:01:29.465 common/nitrox: not in enabled drivers build config 00:01:29.465 common/qat: not in enabled drivers build config 00:01:29.465 common/sfc_efx: not in enabled drivers build config 00:01:29.465 
mempool/bucket: not in enabled drivers build config 00:01:29.465 mempool/cnxk: not in enabled drivers build config 00:01:29.465 mempool/dpaa: not in enabled drivers build config 00:01:29.465 mempool/dpaa2: not in enabled drivers build config 00:01:29.465 mempool/octeontx: not in enabled drivers build config 00:01:29.465 mempool/stack: not in enabled drivers build config 00:01:29.465 dma/cnxk: not in enabled drivers build config 00:01:29.465 dma/dpaa: not in enabled drivers build config 00:01:29.465 dma/dpaa2: not in enabled drivers build config 00:01:29.465 dma/hisilicon: not in enabled drivers build config 00:01:29.465 dma/idxd: not in enabled drivers build config 00:01:29.465 dma/ioat: not in enabled drivers build config 00:01:29.465 dma/odm: not in enabled drivers build config 00:01:29.465 dma/skeleton: not in enabled drivers build config 00:01:29.465 net/af_packet: not in enabled drivers build config 00:01:29.465 net/af_xdp: not in enabled drivers build config 00:01:29.465 net/ark: not in enabled drivers build config 00:01:29.465 net/atlantic: not in enabled drivers build config 00:01:29.465 net/avp: not in enabled drivers build config 00:01:29.465 net/axgbe: not in enabled drivers build config 00:01:29.465 net/bnx2x: not in enabled drivers build config 00:01:29.465 net/bnxt: not in enabled drivers build config 00:01:29.465 net/bonding: not in enabled drivers build config 00:01:29.465 net/cnxk: not in enabled drivers build config 00:01:29.465 net/cpfl: not in enabled drivers build config 00:01:29.465 net/cxgbe: not in enabled drivers build config 00:01:29.465 net/dpaa: not in enabled drivers build config 00:01:29.465 net/dpaa2: not in enabled drivers build config 00:01:29.465 net/e1000: not in enabled drivers build config 00:01:29.465 net/ena: not in enabled drivers build config 00:01:29.465 net/enetc: not in enabled drivers build config 00:01:29.465 net/enetfec: not in enabled drivers build config 00:01:29.465 net/enic: not in enabled drivers build config 00:01:29.465 net/failsafe: not in enabled drivers build config 00:01:29.465 net/fm10k: not in enabled drivers build config 00:01:29.465 net/gve: not in enabled drivers build config 00:01:29.465 net/hinic: not in enabled drivers build config 00:01:29.465 net/hns3: not in enabled drivers build config 00:01:29.465 net/iavf: not in enabled drivers build config 00:01:29.465 net/ice: not in enabled drivers build config 00:01:29.465 net/idpf: not in enabled drivers build config 00:01:29.465 net/igc: not in enabled drivers build config 00:01:29.465 net/ionic: not in enabled drivers build config 00:01:29.465 net/ipn3ke: not in enabled drivers build config 00:01:29.465 net/ixgbe: not in enabled drivers build config 00:01:29.465 net/mana: not in enabled drivers build config 00:01:29.465 net/memif: not in enabled drivers build config 00:01:29.465 net/mlx4: not in enabled drivers build config 00:01:29.465 net/mlx5: not in enabled drivers build config 00:01:29.465 net/mvneta: not in enabled drivers build config 00:01:29.465 net/mvpp2: not in enabled drivers build config 00:01:29.465 net/netvsc: not in enabled drivers build config 00:01:29.465 net/nfb: not in enabled drivers build config 00:01:29.465 net/nfp: not in enabled drivers build config 00:01:29.465 net/ngbe: not in enabled drivers build config 00:01:29.465 net/ntnic: not in enabled drivers build config 00:01:29.465 net/null: not in enabled drivers build config 00:01:29.465 net/octeontx: not in enabled drivers build config 00:01:29.465 net/octeon_ep: not in enabled drivers build config 
00:01:29.465 net/pcap: not in enabled drivers build config 00:01:29.465 net/pfe: not in enabled drivers build config 00:01:29.465 net/qede: not in enabled drivers build config 00:01:29.465 net/ring: not in enabled drivers build config 00:01:29.465 net/sfc: not in enabled drivers build config 00:01:29.465 net/softnic: not in enabled drivers build config 00:01:29.465 net/tap: not in enabled drivers build config 00:01:29.465 net/thunderx: not in enabled drivers build config 00:01:29.465 net/txgbe: not in enabled drivers build config 00:01:29.465 net/vdev_netvsc: not in enabled drivers build config 00:01:29.465 net/vhost: not in enabled drivers build config 00:01:29.465 net/virtio: not in enabled drivers build config 00:01:29.465 net/vmxnet3: not in enabled drivers build config 00:01:29.465 raw/cnxk_bphy: not in enabled drivers build config 00:01:29.465 raw/cnxk_gpio: not in enabled drivers build config 00:01:29.465 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:29.465 raw/ifpga: not in enabled drivers build config 00:01:29.465 raw/ntb: not in enabled drivers build config 00:01:29.465 raw/skeleton: not in enabled drivers build config 00:01:29.465 crypto/armv8: not in enabled drivers build config 00:01:29.465 crypto/bcmfs: not in enabled drivers build config 00:01:29.465 crypto/caam_jr: not in enabled drivers build config 00:01:29.465 crypto/ccp: not in enabled drivers build config 00:01:29.465 crypto/cnxk: not in enabled drivers build config 00:01:29.465 crypto/dpaa_sec: not in enabled drivers build config 00:01:29.465 crypto/dpaa2_sec: not in enabled drivers build config 00:01:29.465 crypto/ionic: not in enabled drivers build config 00:01:29.465 crypto/ipsec_mb: not in enabled drivers build config 00:01:29.465 crypto/mlx5: not in enabled drivers build config 00:01:29.465 crypto/mvsam: not in enabled drivers build config 00:01:29.465 crypto/nitrox: not in enabled drivers build config 00:01:29.465 crypto/null: not in enabled drivers build config 00:01:29.465 crypto/octeontx: not in enabled drivers build config 00:01:29.465 crypto/openssl: not in enabled drivers build config 00:01:29.465 crypto/scheduler: not in enabled drivers build config 00:01:29.465 crypto/uadk: not in enabled drivers build config 00:01:29.465 crypto/virtio: not in enabled drivers build config 00:01:29.465 compress/isal: not in enabled drivers build config 00:01:29.465 compress/mlx5: not in enabled drivers build config 00:01:29.465 compress/nitrox: not in enabled drivers build config 00:01:29.465 compress/octeontx: not in enabled drivers build config 00:01:29.465 compress/uadk: not in enabled drivers build config 00:01:29.465 compress/zlib: not in enabled drivers build config 00:01:29.465 regex/mlx5: not in enabled drivers build config 00:01:29.465 regex/cn9k: not in enabled drivers build config 00:01:29.465 ml/cnxk: not in enabled drivers build config 00:01:29.465 vdpa/ifc: not in enabled drivers build config 00:01:29.465 vdpa/mlx5: not in enabled drivers build config 00:01:29.465 vdpa/nfp: not in enabled drivers build config 00:01:29.465 vdpa/sfc: not in enabled drivers build config 00:01:29.465 event/cnxk: not in enabled drivers build config 00:01:29.465 event/dlb2: not in enabled drivers build config 00:01:29.465 event/dpaa: not in enabled drivers build config 00:01:29.465 event/dpaa2: not in enabled drivers build config 00:01:29.465 event/dsw: not in enabled drivers build config 00:01:29.465 event/opdl: not in enabled drivers build config 00:01:29.465 event/skeleton: not in enabled drivers build config 
00:01:29.465 event/sw: not in enabled drivers build config 00:01:29.465 event/octeontx: not in enabled drivers build config 00:01:29.465 baseband/acc: not in enabled drivers build config 00:01:29.465 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:29.465 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:29.465 baseband/la12xx: not in enabled drivers build config 00:01:29.465 baseband/null: not in enabled drivers build config 00:01:29.465 baseband/turbo_sw: not in enabled drivers build config 00:01:29.465 gpu/cuda: not in enabled drivers build config 00:01:29.465 00:01:29.465 00:01:29.465 Build targets in project: 224 00:01:29.465 00:01:29.465 DPDK 24.07.0-rc3 00:01:29.465 00:01:29.465 User defined options 00:01:29.465 libdir : lib 00:01:29.465 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:29.465 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:29.465 c_link_args : 00:01:29.465 enable_docs : false 00:01:29.465 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:29.465 enable_kmods : false 00:01:29.465 machine : native 00:01:29.465 tests : false 00:01:29.465 00:01:29.465 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:29.465 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:29.465 23:07:26 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:29.465 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:29.465 [1/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:29.465 [2/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:29.465 [3/723] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:29.465 [4/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:29.465 [5/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:29.466 [6/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:29.466 [7/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:29.466 [8/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:29.466 [9/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:29.466 [10/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:29.466 [11/723] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:29.466 [12/723] Linking static target lib/librte_kvargs.a 00:01:29.466 [13/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:29.725 [14/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:29.725 [15/723] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:29.725 [16/723] Linking static target lib/librte_log.a 00:01:29.725 [17/723] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:01:29.725 [18/723] Linking static target lib/librte_argparse.a 00:01:29.986 [19/723] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.250 [20/723] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.519 [21/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:30.519 [22/723] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:30.519 [23/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:30.519 [24/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:30.519 [25/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:30.519 [26/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:30.519 [27/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:30.519 [28/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:30.519 [29/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:30.519 [30/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:30.519 [31/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:30.519 [32/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:30.519 [33/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:30.519 [34/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:30.519 [35/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:30.519 [36/723] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.519 [37/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:30.519 [38/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:30.519 [39/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:30.519 [40/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:30.519 [41/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:30.519 [42/723] Linking target lib/librte_log.so.24.2 00:01:30.519 [43/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:30.519 [44/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:30.519 [45/723] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:30.519 [46/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:30.519 [47/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:30.519 [48/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:30.519 [49/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:30.781 [50/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:30.781 [51/723] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:30.781 [52/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:30.781 [53/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:30.781 [54/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:30.781 [55/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:30.781 [56/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:30.781 [57/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:30.781 [58/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:30.781 [59/723] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:01:30.781 [60/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:30.781 [61/723] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:30.781 [62/723] Linking target lib/librte_argparse.so.24.2 00:01:30.781 [63/723] Linking target lib/librte_kvargs.so.24.2 00:01:31.043 [64/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:31.043 [65/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:31.043 [66/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:31.043 [67/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:31.043 [68/723] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:01:31.043 [69/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:31.305 [70/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:31.305 [71/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:31.305 [72/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:31.305 [73/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:31.565 [74/723] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:31.565 [75/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:31.565 [76/723] Linking static target lib/librte_pci.a 00:01:31.565 [77/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:01:31.565 [78/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:31.565 [79/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:31.565 [80/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:31.565 [81/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:31.827 [82/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:31.827 [83/723] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:31.827 [84/723] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:31.827 [85/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:31.827 [86/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:31.827 [87/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:31.827 [88/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:31.827 [89/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:31.827 [90/723] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:31.827 [91/723] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:31.827 [92/723] Linking static target lib/librte_ring.a 00:01:31.827 [93/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:31.827 [94/723] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:31.827 [95/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:31.827 [96/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:31.827 [97/723] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:31.827 [98/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:31.827 [99/723] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:31.827 [100/723] Linking static target lib/librte_meter.a 00:01:31.827 [101/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:31.827 [102/723] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 
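[Aside on the meson deprecation warning logged above: the build drives the top-level `meson` command without the explicit `setup` subcommand, which newer meson releases flag as ambiguous. A rough sketch of the equivalent non-deprecated invocation, reconstructed from the "User defined options" summary printed in this log — the actual command lives in SPDK's autobuild scripts and is not shown here, so the exact option spellings are assumptions:

    meson setup build-tmp \
        --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false
    # compile step, as actually recorded in this log:
    ninja -C build-tmp -j48
]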
00:01:31.827 [103/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:31.827 [104/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:31.827 [105/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:31.827 [106/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:31.827 [107/723] Linking static target lib/librte_telemetry.a 00:01:31.827 [108/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:32.090 [109/723] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:32.090 [110/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:32.090 [111/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:32.090 [112/723] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:32.090 [113/723] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:32.090 [114/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:32.090 [115/723] Linking static target lib/librte_net.a 00:01:32.090 [116/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:32.352 [117/723] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.352 [118/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:32.352 [119/723] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.352 [120/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:32.352 [121/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:32.352 [122/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:32.352 [123/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:32.352 [124/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:32.614 [125/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:32.614 [126/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:32.614 [127/723] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.614 [128/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:32.614 [129/723] Linking static target lib/librte_mempool.a 00:01:32.614 [130/723] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.614 [131/723] Linking target lib/librte_telemetry.so.24.2 00:01:32.614 [132/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:32.614 [133/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:32.614 [134/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:32.614 [135/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:32.878 [136/723] Linking static target lib/librte_eal.a 00:01:32.878 [137/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:32.878 [138/723] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:01:32.878 [139/723] Linking static target lib/librte_cmdline.a 00:01:32.878 [140/723] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:32.878 [141/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:32.878 [142/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:32.878 [143/723] Compiling 
C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:33.140 [144/723] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:33.140 [145/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:33.140 [146/723] Linking static target lib/librte_cfgfile.a 00:01:33.140 [147/723] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:33.140 [148/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:33.140 [149/723] Linking static target lib/librte_metrics.a 00:01:33.140 [150/723] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:33.140 [151/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:33.140 [152/723] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:33.140 [153/723] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:33.140 [154/723] Linking static target lib/librte_rcu.a 00:01:33.403 [155/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:33.403 [156/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:33.403 [157/723] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:33.403 [158/723] Linking static target lib/librte_bitratestats.a 00:01:33.403 [159/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:33.403 [160/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:33.403 [161/723] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:33.668 [162/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:33.668 [163/723] Linking static target lib/librte_mbuf.a 00:01:33.668 [164/723] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.668 [165/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:33.668 [166/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:33.668 [167/723] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.668 [168/723] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.668 [169/723] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.668 [170/723] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:33.668 [171/723] Linking static target lib/librte_timer.a 00:01:33.668 [172/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:33.668 [173/723] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.934 [174/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:33.934 [175/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:33.934 [176/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:33.934 [177/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:33.934 [178/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:33.934 [179/723] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:33.934 [180/723] Linking static target lib/librte_bbdev.a 00:01:34.203 [181/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:34.203 [182/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:34.203 [183/723] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.203 [184/723] Linking static target lib/librte_compressdev.a 
00:01:34.203 [185/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:34.203 [186/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:34.203 [187/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:34.203 [188/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:34.203 [189/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:34.466 [190/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:34.466 [191/723] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.466 [192/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:34.466 [193/723] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.727 [194/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:35.013 [195/723] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.013 [196/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:35.013 [197/723] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.013 [198/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:35.013 [199/723] Linking static target lib/librte_distributor.a 00:01:35.013 [200/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:35.013 [201/723] Linking static target lib/librte_dmadev.a 00:01:35.013 [202/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:35.284 [203/723] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:35.284 [204/723] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:35.284 [205/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:35.284 [206/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:35.284 [207/723] Linking static target lib/librte_bpf.a 00:01:35.284 [208/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:35.284 [209/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:35.284 [210/723] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:35.284 [211/723] Linking static target lib/librte_dispatcher.a 00:01:35.284 [212/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:35.284 [213/723] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:35.284 [214/723] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:35.284 [215/723] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:35.284 [216/723] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.284 [217/723] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:35.284 [218/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:35.284 [219/723] Linking static target lib/librte_gpudev.a 00:01:35.284 [220/723] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:35.542 [221/723] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:35.542 [222/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:35.542 [223/723] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:35.542 [224/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 
00:01:35.542 [225/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:35.542 [226/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:35.542 [227/723] Linking static target lib/librte_gro.a 00:01:35.543 [228/723] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:35.543 [229/723] Linking static target lib/librte_jobstats.a 00:01:35.543 [230/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:35.543 [231/723] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:35.543 [232/723] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:35.803 [233/723] Linking static target lib/librte_gso.a 00:01:35.803 [234/723] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.803 [235/723] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.803 [236/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:35.803 [237/723] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.803 [238/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:35.803 [239/723] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:36.065 [240/723] Linking static target lib/librte_latencystats.a 00:01:36.065 [241/723] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.065 [242/723] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.065 [243/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:36.065 [244/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:36.065 [245/723] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.065 [246/723] Linking static target lib/librte_ip_frag.a 00:01:36.065 [247/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:36.065 [248/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:36.065 [249/723] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:36.065 [250/723] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:36.327 [251/723] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:36.327 [252/723] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:36.327 [253/723] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:36.327 [254/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:36.327 [255/723] Linking static target lib/librte_efd.a 00:01:36.327 [256/723] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.327 [257/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:36.593 [258/723] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.593 [259/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:36.593 [260/723] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:36.593 [261/723] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:36.593 [262/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:36.593 [263/723] Compiling C object 
lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:36.593 [264/723] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:36.593 [265/723] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.852 [266/723] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.852 [267/723] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:36.852 [268/723] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:36.852 [269/723] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:36.852 [270/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:36.852 [271/723] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:37.117 [272/723] Linking static target lib/librte_regexdev.a 00:01:37.117 [273/723] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:37.117 [274/723] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:37.117 [275/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:37.117 [276/723] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:37.117 [277/723] Linking static target lib/librte_pcapng.a 00:01:37.117 [278/723] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:37.117 [279/723] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:37.117 [280/723] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:37.117 [281/723] Linking static target lib/librte_rawdev.a 00:01:37.117 [282/723] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:37.117 [283/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:37.117 [284/723] Linking static target lib/librte_power.a 00:01:37.117 [285/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:37.117 [286/723] Linking static target lib/librte_lpm.a 00:01:37.117 [287/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:37.376 [288/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:37.376 [289/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:37.376 [290/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:37.376 [291/723] Linking static target lib/librte_stack.a 00:01:37.376 [292/723] Linking static target lib/librte_mldev.a 00:01:37.376 [293/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:37.376 [294/723] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:37.376 [295/723] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.376 [296/723] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:01:37.639 [297/723] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:37.639 [298/723] Linking static target lib/acl/libavx2_tmp.a 00:01:37.639 [299/723] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:37.639 [300/723] Linking static target lib/librte_reorder.a 00:01:37.639 [301/723] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:37.639 [302/723] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.639 [303/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:37.639 [304/723] Linking static target lib/librte_cryptodev.a 00:01:37.639 [305/723] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:01:37.639 [306/723] Linking static target lib/librte_security.a 00:01:37.639 [307/723] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.906 [308/723] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:37.906 [309/723] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:37.906 [310/723] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.906 [311/723] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:37.906 [312/723] Linking static target lib/librte_hash.a 00:01:37.906 [313/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:38.169 [314/723] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.169 [315/723] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:38.169 [316/723] Linking static target lib/acl/libavx512_tmp.a 00:01:38.169 [317/723] Linking static target lib/librte_acl.a 00:01:38.169 [318/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:38.169 [319/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:38.169 [320/723] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.169 [321/723] Linking static target lib/librte_rib.a 00:01:38.169 [322/723] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:38.169 [323/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:38.169 [324/723] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.169 [325/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:38.169 [326/723] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:38.169 [327/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:38.169 [328/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:38.435 [329/723] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:38.435 [330/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:38.435 [331/723] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:38.435 [332/723] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:38.435 [333/723] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.435 [334/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:38.435 [335/723] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:38.435 [336/723] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:38.435 [337/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:38.435 [338/723] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:38.435 [339/723] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.695 [340/723] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:38.961 [341/723] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.961 [342/723] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:01:38.961 [343/723] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.961 [344/723] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:39.531 [345/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:39.531 [346/723] Compiling C 
object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:39.531 [347/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:39.531 [348/723] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:39.531 [349/723] Linking static target lib/librte_eventdev.a 00:01:39.531 [350/723] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:39.531 [351/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:39.531 [352/723] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:39.531 [353/723] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:39.531 [354/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:39.531 [355/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:39.531 [356/723] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.531 [357/723] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:39.794 [358/723] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.794 [359/723] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:39.794 [360/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:39.794 [361/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:39.794 [362/723] Linking static target lib/librte_member.a 00:01:39.794 [363/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:39.794 [364/723] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:39.794 [365/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:39.794 [366/723] Linking static target lib/librte_sched.a 00:01:39.794 [367/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:39.794 [368/723] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:39.794 [369/723] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:39.794 [370/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:39.794 [371/723] Linking static target lib/librte_fib.a 00:01:39.794 [372/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:39.794 [373/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:39.794 [374/723] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:40.057 [375/723] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:40.057 [376/723] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:40.057 [377/723] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:40.057 [378/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:40.057 [379/723] Linking static target lib/librte_ethdev.a 00:01:40.057 [380/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:40.321 [381/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:40.321 [382/723] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.321 [383/723] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:40.321 [384/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:40.321 [385/723] Linking static target lib/librte_ipsec.a 00:01:40.321 [386/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:40.321 [387/723] Generating lib/sched.sym_chk 
with a custom command (wrapped by meson to capture output) 00:01:40.321 [388/723] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.585 [389/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:40.585 [390/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:40.848 [391/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:40.848 [392/723] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:40.848 [393/723] Linking static target lib/librte_pdump.a 00:01:40.848 [394/723] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:40.848 [395/723] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:40.848 [396/723] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:40.848 [397/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:40.848 [398/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:40.848 [399/723] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.848 [400/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:40.848 [401/723] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:40.848 [402/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:41.109 [403/723] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:41.109 [404/723] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:41.109 [405/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:41.109 [406/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:41.109 [407/723] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:41.109 [408/723] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:41.109 [409/723] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.109 [410/723] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:41.375 [411/723] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:41.375 [412/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:41.375 [413/723] Linking static target lib/librte_pdcp.a 00:01:41.375 [414/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:41.375 [415/723] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:41.375 [416/723] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:41.375 [417/723] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:41.375 [418/723] Linking static target lib/librte_table.a 00:01:41.638 [419/723] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:41.638 [420/723] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:41.638 [421/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:41.900 [422/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:41.900 [423/723] Linking static target lib/librte_graph.a 00:01:41.900 [424/723] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.900 [425/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:41.900 [426/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:41.900 [427/723] Compiling C object 
lib/librte_node.a.p/node_kernel_rx.c.o 00:01:41.900 [428/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:42.163 [429/723] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:42.163 [430/723] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:42.163 [431/723] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:42.163 [432/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:42.163 [433/723] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:01:42.163 [434/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:42.163 [435/723] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:42.163 [436/723] Linking static target lib/librte_port.a 00:01:42.163 [437/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:42.163 [438/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:42.163 [439/723] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:42.163 [440/723] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:42.430 [441/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:42.430 [442/723] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.694 [443/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:42.694 [444/723] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:42.694 [445/723] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:42.694 [446/723] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:42.694 [447/723] Linking static target drivers/librte_bus_vdev.a 00:01:42.694 [448/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:42.694 [449/723] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:42.694 [450/723] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.694 [451/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:42.959 [452/723] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.959 [453/723] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:42.959 [454/723] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:42.959 [455/723] Linking static target lib/librte_node.a 00:01:42.959 [456/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:42.959 [457/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:42.959 [458/723] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:42.959 [459/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:42.959 [460/723] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:42.959 [461/723] Linking static target drivers/librte_bus_pci.a 00:01:42.959 [462/723] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:42.959 [463/723] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.222 [464/723] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.222 [465/723] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:43.222 [466/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:43.222 [467/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:43.222 [468/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:43.491 [469/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:43.491 [470/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:43.491 [471/723] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:43.491 [472/723] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:43.491 [473/723] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:43.491 [474/723] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.491 [475/723] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:43.491 [476/723] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:43.754 [477/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:43.754 [478/723] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.754 [479/723] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:43.754 [480/723] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:43.754 [481/723] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:43.754 [482/723] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:43.754 [483/723] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.754 [484/723] Linking target lib/librte_eal.so.24.2 00:01:43.754 [485/723] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:44.016 [486/723] Linking static target drivers/librte_mempool_ring.a 00:01:44.016 [487/723] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:44.016 [488/723] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:44.016 [489/723] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:01:44.016 [490/723] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:44.016 [491/723] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:44.016 [492/723] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:44.281 [493/723] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:01:44.281 [494/723] Linking target lib/librte_ring.so.24.2 00:01:44.281 [495/723] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:44.281 [496/723] Linking target lib/librte_meter.so.24.2 00:01:44.281 [497/723] Linking target lib/librte_pci.so.24.2 00:01:44.281 [498/723] Linking target lib/librte_timer.so.24.2 00:01:44.281 [499/723] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:44.281 [500/723] Linking target lib/librte_cfgfile.so.24.2 00:01:44.281 [501/723] Linking target lib/librte_acl.so.24.2 00:01:44.281 [502/723] Linking target lib/librte_dmadev.so.24.2 00:01:44.281 [503/723] Linking target lib/librte_jobstats.so.24.2 00:01:44.281 [504/723] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:44.281 [505/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:44.281 [506/723] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:44.281 [507/723] Linking target lib/librte_rawdev.so.24.2 00:01:44.281 
[508/723] Linking target lib/librte_stack.so.24.2 00:01:44.545 [509/723] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:01:44.545 [510/723] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:01:44.545 [511/723] Linking target drivers/librte_bus_vdev.so.24.2 00:01:44.545 [512/723] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:01:44.545 [513/723] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:01:44.545 [514/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:44.545 [515/723] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:01:44.545 [516/723] Linking target lib/librte_rcu.so.24.2 00:01:44.545 [517/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:44.545 [518/723] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:01:44.545 [519/723] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:44.545 [520/723] Linking target drivers/librte_bus_pci.so.24.2 00:01:44.545 [521/723] Linking target lib/librte_mempool.so.24.2 00:01:44.545 [522/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:44.545 [523/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:44.545 [524/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:44.805 [525/723] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:01:44.805 [526/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:44.805 [527/723] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:01:44.805 [528/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:44.805 [529/723] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:01:44.805 [530/723] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:01:44.805 [531/723] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:44.805 [532/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:44.805 [533/723] Linking target lib/librte_mbuf.so.24.2 00:01:44.805 [534/723] Linking target drivers/librte_mempool_ring.so.24.2 00:01:44.805 [535/723] Linking target lib/librte_rib.so.24.2 00:01:44.805 [536/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:45.066 [537/723] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:45.066 [538/723] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:45.066 [539/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:45.066 [540/723] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:01:45.066 [541/723] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:01:45.066 [542/723] Linking target lib/librte_net.so.24.2 00:01:45.066 [543/723] Linking target lib/librte_bbdev.so.24.2 00:01:45.066 [544/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:45.328 [545/723] Linking target lib/librte_compressdev.so.24.2 00:01:45.328 [546/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:45.328 [547/723] Linking target 
lib/librte_cryptodev.so.24.2 00:01:45.328 [548/723] Linking target lib/librte_gpudev.so.24.2 00:01:45.328 [549/723] Linking target lib/librte_distributor.so.24.2 00:01:45.328 [550/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:45.328 [551/723] Linking target lib/librte_regexdev.so.24.2 00:01:45.328 [552/723] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:01:45.328 [553/723] Linking target lib/librte_mldev.so.24.2 00:01:45.328 [554/723] Linking target lib/librte_reorder.so.24.2 00:01:45.328 [555/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:45.328 [556/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:45.328 [557/723] Linking target lib/librte_sched.so.24.2 00:01:45.328 [558/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:45.328 [559/723] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:45.328 [560/723] Linking target lib/librte_hash.so.24.2 00:01:45.328 [561/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:45.328 [562/723] Linking target lib/librte_cmdline.so.24.2 00:01:45.328 [563/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:45.328 [564/723] Linking target lib/librte_fib.so.24.2 00:01:45.597 [565/723] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:01:45.597 [566/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:45.597 [567/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:45.597 [568/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:45.597 [569/723] Linking target lib/librte_security.so.24.2 00:01:45.597 [570/723] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:01:45.597 [571/723] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:01:45.597 [572/723] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:01:45.597 [573/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:45.597 [574/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:45.597 [575/723] Linking target lib/librte_efd.so.24.2 00:01:45.597 [576/723] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:45.859 [577/723] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:45.859 [578/723] Linking target lib/librte_lpm.so.24.2 00:01:45.859 [579/723] Linking target lib/librte_member.so.24.2 00:01:45.859 [580/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:45.859 [581/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:45.859 [582/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:45.859 [583/723] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:01:45.859 [584/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:45.859 [585/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:45.859 [586/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 
00:01:45.859 [587/723] Linking target lib/librte_ipsec.so.24.2 00:01:45.859 [588/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:45.859 [589/723] Linking target lib/librte_pdcp.so.24.2 00:01:46.127 [590/723] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:01:46.127 [591/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:46.128 [592/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:46.128 [593/723] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:01:46.128 [594/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:46.128 [595/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:46.128 [596/723] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:46.388 [597/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:46.388 [598/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:46.388 [599/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:46.388 [600/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:46.653 [601/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:46.653 [602/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:46.653 [603/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:46.653 [604/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:46.653 [605/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:46.653 [606/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:46.916 [607/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:46.916 [608/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:46.916 [609/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:47.176 [610/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:47.176 [611/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:47.176 [612/723] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:47.176 [613/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:47.176 [614/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:47.176 [615/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:47.176 [616/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:47.176 [617/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:47.176 [618/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:47.176 [619/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:47.434 [620/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:47.434 [621/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:47.434 [622/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:47.434 [623/723] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:47.693 [624/723] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:47.693 [625/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:47.952 [626/723] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:47.952 [627/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:47.952 [628/723] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:47.952 [629/723] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:47.952 [630/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:47.952 [631/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:47.952 [632/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:47.952 [633/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:47.952 [634/723] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:47.952 [635/723] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:47.952 [636/723] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.952 [637/723] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:48.211 [638/723] Linking target lib/librte_ethdev.so.24.2 00:01:48.211 [639/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:48.211 [640/723] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:48.211 [641/723] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:01:48.211 [642/723] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:48.211 [643/723] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:48.211 [644/723] Linking target lib/librte_gro.so.24.2 00:01:48.211 [645/723] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:01:48.470 [646/723] Linking target lib/librte_pcapng.so.24.2 00:01:48.470 [647/723] Linking target lib/librte_metrics.so.24.2 00:01:48.470 [648/723] Linking target lib/librte_gso.so.24.2 00:01:48.470 [649/723] Linking target lib/librte_ip_frag.so.24.2 00:01:48.470 [650/723] Linking target lib/librte_bpf.so.24.2 00:01:48.470 [651/723] Linking target lib/librte_power.so.24.2 00:01:48.470 [652/723] Linking target lib/librte_eventdev.so.24.2 00:01:48.470 [653/723] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:48.470 [654/723] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:48.470 [655/723] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:01:48.470 [656/723] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:48.470 [657/723] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:01:48.470 [658/723] Linking target lib/librte_bitratestats.so.24.2 00:01:48.470 [659/723] Linking target lib/librte_latencystats.so.24.2 00:01:48.470 [660/723] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:01:48.470 [661/723] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:01:48.470 [662/723] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:01:48.470 [663/723] Linking target lib/librte_dispatcher.so.24.2 00:01:48.470 [664/723] Linking target lib/librte_graph.so.24.2 00:01:48.470 [665/723] Linking target lib/librte_pdump.so.24.2 00:01:48.470 [666/723] Linking target lib/librte_port.so.24.2 00:01:48.728 [667/723] Compiling C object 
app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:48.728 [668/723] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:01:48.728 [669/723] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:01:48.728 [670/723] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:48.728 [671/723] Linking target lib/librte_node.so.24.2 00:01:48.728 [672/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:48.728 [673/723] Linking target lib/librte_table.so.24.2 00:01:48.986 [674/723] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:01:48.987 [675/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:48.987 [676/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:48.987 [677/723] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:48.987 [678/723] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:49.244 [679/723] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:49.502 [680/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:49.760 [681/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:49.760 [682/723] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:49.760 [683/723] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:50.018 [684/723] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:50.018 [685/723] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:50.018 [686/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:50.276 [687/723] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:50.276 [688/723] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:50.276 [689/723] Linking static target drivers/librte_net_i40e.a 00:01:50.533 [690/723] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:50.792 [691/723] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.792 [692/723] Linking target drivers/librte_net_i40e.so.24.2 00:01:51.419 [693/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:51.419 [694/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:51.986 [695/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:00.092 [696/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:00.092 [697/723] Linking static target lib/librte_pipeline.a 00:02:00.350 [698/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:00.350 [699/723] Linking static target lib/librte_vhost.a 00:02:00.915 [700/723] Linking target app/dpdk-test-cmdline 00:02:00.915 [701/723] Linking target app/dpdk-test-dma-perf 00:02:00.915 [702/723] Linking target app/dpdk-test-flow-perf 00:02:00.915 [703/723] Linking target app/dpdk-test-regex 00:02:00.915 [704/723] Linking target app/dpdk-dumpcap 00:02:00.915 [705/723] Linking target app/dpdk-proc-info 00:02:00.915 [706/723] Linking target app/dpdk-pdump 00:02:00.915 [707/723] Linking target app/dpdk-graph 00:02:00.915 [708/723] Linking target app/dpdk-test-acl 00:02:00.915 [709/723] Linking target app/dpdk-test-fib 00:02:00.915 [710/723] Linking target app/dpdk-test-gpudev 00:02:00.915 [711/723] Linking 
target app/dpdk-test-mldev 00:02:00.915 [712/723] Linking target app/dpdk-test-pipeline 00:02:00.915 [713/723] Linking target app/dpdk-test-security-perf 00:02:00.915 [714/723] Linking target app/dpdk-test-sad 00:02:00.915 [715/723] Linking target app/dpdk-test-crypto-perf 00:02:00.915 [716/723] Linking target app/dpdk-test-eventdev 00:02:00.915 [717/723] Linking target app/dpdk-test-bbdev 00:02:00.915 [718/723] Linking target app/dpdk-test-compress-perf 00:02:00.915 [719/723] Linking target app/dpdk-testpmd 00:02:01.479 [720/723] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.479 [721/723] Linking target lib/librte_vhost.so.24.2 00:02:02.851 [722/723] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.851 [723/723] Linking target lib/librte_pipeline.so.24.2 00:02:02.851 23:08:00 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:02.851 23:08:00 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:02.851 23:08:00 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:02.851 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:02.851 [0/1] Installing files. 00:02:03.114 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/memory.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/cpu.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/counters.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:03.114 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.114 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:03.114 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 
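
The long run of "Installing ..." entries records meson's install phase, which the harness kicked off after checking the host OS: in the trace, `uname -s` returned Linux, and the escaped pattern in `[[ Linux == \F\r\e\e\B\S\D ]]` is simply bash xtrace quoting a literal FreeBSD comparison that did not match. A sketch of that guard and the install call as traced above, with the FreeBSD branch left empty because its body is not visible in this log:

    os=$(uname -s)
    if [[ "$os" == "FreeBSD" ]]; then
        :  # FreeBSD-specific handling (not shown in this log)
    fi
    ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install
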
00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 
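
Each example is copied with its sources and Makefile, so the installed tree under build/share/dpdk/examples remains buildable on its own. A minimal sketch of compiling one of them after this install, assuming the libdpdk pkg-config file from the same prefix is on PKG_CONFIG_PATH (the choice of l2fwd is illustrative):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
    make    # the example Makefiles locate DPDK via 'pkg-config --cflags --libs libdpdk'
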
00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.115 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.116 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.116 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.117 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:03.118 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:03.118 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:03.118 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:03.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:03.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:03.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:03.119 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:03.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:03.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:03.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:03.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:03.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:03.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:03.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:03.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:03.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:03.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:03.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:03.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:03.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:03.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:03.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:03.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.119 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.119 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.120 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.120 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.120 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.120 Installing lib/librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.120 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.120 Installing lib/librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.120 Installing lib/librte_argparse.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.120 Installing lib/librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.120 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.120 Installing lib/librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_eal.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
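(The "Installing X to Y" records above and below are the standard output of DPDK's meson install phase. A minimal sketch of how this phase is typically driven, assuming the install prefix is the dpdk/build directory seen in the destination paths; the build directory name and configure flags are hypothetical, since the actual invocation is not shown in this part of the log:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
    meson setup build-tmp --prefix="$(pwd)/build"   # hypothetical build dir; prefix matches the paths logged here
    ninja -C build-tmp                              # compile
    meson install -C build-tmp                      # emits the "Installing ... to ..." lines

meson install copies the example sources under share/dpdk/examples and the compiled artifacts under lib/, bin/ and include/, which matches the destinations logged in this section.)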
00:02:03.121 Installing lib/librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.121 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
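(Each library appears twice because both the static archive, librte_*.a, and the versioned shared object, librte_*.so.24.2, are installed side by side. A downstream build would normally locate these through pkg-config rather than hard-coding build/lib; a minimal sketch, assuming this prefix ships a libdpdk.pc under lib/pkgconfig — the pkgconfig path is an assumption, not taken from this log:

    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig  # assumed location
    cc -O2 app.c $(pkg-config --cflags --libs libdpdk) -o app   # links against the .so.24.2 libraries installed above

)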
00:02:03.121 Installing lib/librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_pdump.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing lib/librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing drivers/librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:03.689 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing drivers/librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:03.689 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing drivers/librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:03.689 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.689 Installing drivers/librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:03.689 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.689 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.689 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.689 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.689 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.689 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.689 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.689 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.689 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.689 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.689 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.690 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.690 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.690 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.690 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 
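(Note the split above: the driver PMDs — librte_bus_pci, librte_bus_vdev, librte_mempool_ring, librte_net_i40e — go into the plugin directory lib/dpdk/pmds-24.2, which the EAL can scan for shared drivers at startup, while the test and utility apps land in build/bin. A quick smoke test of the installed binaries could look like the following sketch; the null vdevs and memory flags are illustrative assumptions, not options used by this job:

    sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin/dpdk-testpmd \
        --no-huge -m 256 --vdev=net_null0 --vdev=net_null1 \   # two dummy ports, no hugepage setup needed
        -- --total-num-mbufs=2048 --stats-period=1             # forward between the null ports, print stats

)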
00:02:03.690 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.690 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.690 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.690 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.690 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/argparse/rte_argparse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.690 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ptr_compress/rte_ptr_compress.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.691 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.692 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
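At this point the installer has staged most of DPDK's public headers under dpdk/build/include (the remaining lib/ and drivers/ headers follow below). Consumers are not meant to hard-code these paths: the libdpdk.pc file, installed into dpdk/build/lib/pkgconfig further down in this log, resolves the compile and link flags. A minimal sketch of building against the staged tree, assuming the workspace paths shown here (hello_dpdk.c is a hypothetical one-file consumer, not part of this job):

    # hello_dpdk.c is a hypothetical consumer that calls rte_eal_init()
    # and prints rte_version(); the point here is only the flag plumbing.
    DPDK_BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
    export PKG_CONFIG_PATH="$DPDK_BUILD/lib/pkgconfig"
    cc hello_dpdk.c -o hello_dpdk $(pkg-config --cflags --libs libdpdk)
    # at run time the loader also needs the staged shared objects:
    LD_LIBRARY_PATH="$DPDK_BUILD/lib" ./hello_dpdk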
00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.693 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry-exporter.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:03.694 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:03.694 Installing symlink pointing to librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:03.694 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:03.694 Installing symlink pointing to librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:03.694 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:03.694 Installing symlink pointing to librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so.24 00:02:03.694 Installing symlink pointing to librte_argparse.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so 00:02:03.694 Installing symlink pointing to librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:03.694 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:03.694 Installing symlink pointing to librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:03.694 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:03.694 Installing symlink pointing to librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:03.694 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:03.694 Installing symlink pointing to librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:03.694 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:03.694 Installing symlink pointing to librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:03.694 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:03.694 Installing symlink pointing to librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:03.694 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:03.694 Installing symlink pointing to librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:03.694 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:03.694 Installing symlink pointing to librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:03.694 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:03.694 Installing symlink pointing to librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:03.694 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:03.694 Installing symlink pointing to librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:03.694 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:03.694 Installing symlink pointing to librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:03.694 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:03.694 Installing symlink pointing to librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:03.695 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:03.695 Installing symlink pointing to librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:03.695 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:03.695 Installing symlink pointing to librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:03.695 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:03.695 
Installing symlink pointing to librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:03.695 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:03.695 Installing symlink pointing to librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:03.695 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:03.695 Installing symlink pointing to librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:03.695 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:03.695 Installing symlink pointing to librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:03.695 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:03.695 Installing symlink pointing to librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:03.695 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:03.695 Installing symlink pointing to librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:03.695 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:03.695 Installing symlink pointing to librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:03.695 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:03.695 Installing symlink pointing to librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:03.695 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:03.695 Installing symlink pointing to librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:03.695 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:03.695 Installing symlink pointing to librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:03.695 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:03.695 Installing symlink pointing to librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:03.695 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:03.695 Installing symlink pointing to librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:03.695 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:03.695 Installing symlink pointing to librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:03.695 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:03.695 Installing symlink pointing to librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:03.695 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:03.695 Installing symlink pointing to librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:03.695 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:03.695 Installing symlink pointing to librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:03.695 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:03.695 Installing symlink pointing to librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:03.695 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:03.695 Installing symlink pointing to librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:03.695 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:03.695 Installing symlink pointing to librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:03.695 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:03.695 Installing symlink pointing to librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:03.695 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:03.695 Installing symlink pointing to librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:03.695 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:03.695 Installing symlink pointing to librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:03.695 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:03.695 Installing symlink pointing to librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:03.695 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:03.695 Installing symlink pointing to librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:03.695 Installing symlink pointing to librte_regexdev.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:03.695 Installing symlink pointing to librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:03.695 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:03.695 Installing symlink pointing to librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:03.695 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:03.695 Installing symlink pointing to librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:03.695 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:03.695 Installing symlink pointing to librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:03.695 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:03.695 Installing symlink pointing to librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:03.695 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:03.695 Installing symlink pointing to librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:03.695 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:03.695 Installing symlink pointing to librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:03.695 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:03.695 Installing symlink pointing to librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:03.695 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:03.695 Installing symlink pointing to librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:03.695 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:03.695 Installing symlink pointing to librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:03.695 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:03.695 Installing symlink pointing to librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:03.695 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:03.695 Installing symlink pointing to librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:03.695 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 
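Each DPDK shared object above is installed as a three-name chain: the real file carries the full version (librte_eal.so.24.2), the ABI symlink carries the soname (librte_eal.so.24, what linked binaries resolve at run time), and the bare librte_eal.so is the development symlink used at link time. A quick sketch, assuming the install tree from this log, for verifying one such chain:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
    ls -l librte_eal.so*                          # .so -> .so.24 -> .so.24.2
    readelf -d librte_eal.so.24.2 | grep SONAME   # expect: librte_eal.so.24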
00:02:03.695 Installing symlink pointing to librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:03.695 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:03.695 Installing symlink pointing to librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:03.695 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:03.695 Installing symlink pointing to librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:03.695 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:03.696 Installing symlink pointing to librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:03.696 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:03.696 Installing symlink pointing to librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:02:03.696 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:02:03.696 Installing symlink pointing to librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:02:03.696 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:02:03.696 Installing symlink pointing to librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:02:03.696 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:02:03.696 Installing symlink pointing to librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:02:03.696 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:02:03.696 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:02:03.696 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:02:03.696 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:02:03.696 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:02:03.696 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:02:03.696 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:02:03.696 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 00:02:03.696 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:02:03.696 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:02:03.696 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:02:03.696 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:02:03.696 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:02:03.696 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:02:03.696 23:08:01 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:02:03.696 23:08:01 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:03.696 00:02:03.696 real 0m40.409s 00:02:03.696 user 13m54.844s 00:02:03.696 sys 2m0.983s 00:02:03.696 23:08:01 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:03.696 23:08:01 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:03.696 ************************************ 00:02:03.696 END TEST build_native_dpdk 00:02:03.696 ************************************ 00:02:03.696 23:08:01 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:03.696 23:08:01 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:03.696 23:08:01 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:03.696 23:08:01 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:03.696 23:08:01 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:03.696 23:08:01 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:03.696 23:08:01 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:03.696 23:08:01 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:03.696 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:03.953 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.953 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.953 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:04.210 Using 'verbs' RDMA provider 00:02:14.745 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:22.851 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:23.109 Creating mk/config.mk...done. 00:02:23.109 Creating mk/cc.flags.mk...done. 00:02:23.109 Type 'make' to build. 00:02:23.109 23:08:20 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:02:23.109 23:08:20 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:23.109 23:08:20 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:23.109 23:08:20 -- common/autotest_common.sh@10 -- $ set +x 00:02:23.109 ************************************ 00:02:23.109 START TEST make 00:02:23.109 ************************************ 00:02:23.109 23:08:20 make -- common/autotest_common.sh@1125 -- $ make -j48 00:02:23.368 make[1]: Nothing to be done for 'all'. 
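The configure step above is where the freshly staged DPDK gets wired into SPDK: --with-dpdk points at dpdk/build, and configure confirms it found the pkg-config data there ("Using .../build/lib/pkgconfig for additional libs..."). Reduced to the parts that matter for linking, the wiring looks roughly like this (a sketch with the job's long option list trimmed, not its exact invocation):

    WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    cd "$WS/spdk"
    ./configure --with-dpdk="$WS/dpdk/build" --with-shared --enable-debug
    make -j48    # same parallelism as the run_test invocation above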
00:02:25.289 The Meson build system 00:02:25.289 Version: 1.3.1 00:02:25.289 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:25.289 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:25.289 Build type: native build 00:02:25.289 Project name: libvfio-user 00:02:25.289 Project version: 0.0.1 00:02:25.289 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:25.289 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:25.289 Host machine cpu family: x86_64 00:02:25.289 Host machine cpu: x86_64 00:02:25.289 Run-time dependency threads found: YES 00:02:25.289 Library dl found: YES 00:02:25.289 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:25.289 Run-time dependency json-c found: YES 0.17 00:02:25.289 Run-time dependency cmocka found: YES 1.1.7 00:02:25.289 Program pytest-3 found: NO 00:02:25.289 Program flake8 found: NO 00:02:25.289 Program misspell-fixer found: NO 00:02:25.289 Program restructuredtext-lint found: NO 00:02:25.289 Program valgrind found: YES (/usr/bin/valgrind) 00:02:25.289 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:25.289 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:25.289 Compiler for C supports arguments -Wwrite-strings: YES 00:02:25.289 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:25.289 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:25.289 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:25.289 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:25.289 Build targets in project: 8 00:02:25.289 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:25.289 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:25.289 00:02:25.289 libvfio-user 0.0.1 00:02:25.289 00:02:25.289 User defined options 00:02:25.289 buildtype : debug 00:02:25.289 default_library: shared 00:02:25.289 libdir : /usr/local/lib 00:02:25.289 00:02:25.289 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:25.880 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:25.880 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:25.880 [2/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:26.162 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:26.162 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:26.162 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:26.162 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:26.162 [7/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:26.162 [8/37] Compiling C object samples/null.p/null.c.o 00:02:26.163 [9/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:26.163 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:26.163 [11/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:26.163 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:26.163 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:26.163 [14/37] Compiling C object samples/server.p/server.c.o 00:02:26.163 [15/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:26.163 [16/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:26.163 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:26.163 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:26.163 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:26.163 [20/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:26.163 [21/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:26.163 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:26.163 [23/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:26.163 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:26.163 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:26.163 [26/37] Compiling C object samples/client.p/client.c.o 00:02:26.163 [27/37] Linking target samples/client 00:02:26.163 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:26.434 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:26.434 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:26.434 [31/37] Linking target test/unit_tests 00:02:26.434 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:26.700 [33/37] Linking target samples/null 00:02:26.700 [34/37] Linking target samples/server 00:02:26.700 [35/37] Linking target samples/gpio-pci-idio-16 00:02:26.700 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:26.700 [37/37] Linking target samples/lspci 00:02:26.700 INFO: autodetecting backend as ninja 00:02:26.700 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
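The summary above shows libvfio-user configured as a shared debug build under Meson 1.3.1. A minimal sketch of the out-of-tree flow the next lines carry out, assuming the stock Meson CLI (the DESTDIR staging path is verbatim from the log; the setup options mirror the "User defined options" block above, though the actual invocation is driven by SPDK's build scripts):

    meson setup build-debug ../libvfio-user \
        --buildtype debug --default-library shared --libdir /usr/local/lib
    ninja -C build-debug
    DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
        meson install --quiet -C build-debug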
00:02:26.700 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:27.274 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:27.275 ninja: no work to do. 00:02:39.475 CC lib/ut/ut.o 00:02:39.475 CC lib/log/log.o 00:02:39.475 CC lib/log/log_flags.o 00:02:39.475 CC lib/log/log_deprecated.o 00:02:39.475 CC lib/ut_mock/mock.o 00:02:39.475 LIB libspdk_log.a 00:02:39.475 LIB libspdk_ut_mock.a 00:02:39.475 LIB libspdk_ut.a 00:02:39.475 SO libspdk_ut_mock.so.6.0 00:02:39.475 SO libspdk_ut.so.2.0 00:02:39.475 SO libspdk_log.so.7.0 00:02:39.475 SYMLINK libspdk_ut.so 00:02:39.475 SYMLINK libspdk_ut_mock.so 00:02:39.475 SYMLINK libspdk_log.so 00:02:39.475 CC lib/dma/dma.o 00:02:39.475 CC lib/ioat/ioat.o 00:02:39.475 CC lib/util/base64.o 00:02:39.475 CXX lib/trace_parser/trace.o 00:02:39.475 CC lib/util/bit_array.o 00:02:39.475 CC lib/util/cpuset.o 00:02:39.475 CC lib/util/crc16.o 00:02:39.475 CC lib/util/crc32.o 00:02:39.475 CC lib/util/crc32c.o 00:02:39.475 CC lib/util/crc32_ieee.o 00:02:39.475 CC lib/util/crc64.o 00:02:39.475 CC lib/util/dif.o 00:02:39.475 CC lib/util/fd.o 00:02:39.475 CC lib/util/fd_group.o 00:02:39.475 CC lib/util/file.o 00:02:39.475 CC lib/util/hexlify.o 00:02:39.475 CC lib/util/iov.o 00:02:39.475 CC lib/util/math.o 00:02:39.475 CC lib/util/net.o 00:02:39.475 CC lib/util/pipe.o 00:02:39.475 CC lib/util/strerror_tls.o 00:02:39.475 CC lib/util/string.o 00:02:39.475 CC lib/util/uuid.o 00:02:39.475 CC lib/util/xor.o 00:02:39.475 CC lib/util/zipf.o 00:02:39.475 CC lib/vfio_user/host/vfio_user_pci.o 00:02:39.475 CC lib/vfio_user/host/vfio_user.o 00:02:39.475 LIB libspdk_dma.a 00:02:39.475 SO libspdk_dma.so.4.0 00:02:39.475 SYMLINK libspdk_dma.so 00:02:39.475 LIB libspdk_ioat.a 00:02:39.475 SO libspdk_ioat.so.7.0 00:02:39.475 SYMLINK libspdk_ioat.so 00:02:39.475 LIB libspdk_vfio_user.a 00:02:39.475 SO libspdk_vfio_user.so.5.0 00:02:39.475 SYMLINK libspdk_vfio_user.so 00:02:39.475 LIB libspdk_util.a 00:02:39.734 SO libspdk_util.so.10.0 00:02:39.734 SYMLINK libspdk_util.so 00:02:39.992 CC lib/json/json_parse.o 00:02:39.992 CC lib/rdma_utils/rdma_utils.o 00:02:39.992 CC lib/vmd/vmd.o 00:02:39.993 CC lib/idxd/idxd.o 00:02:39.993 CC lib/vmd/led.o 00:02:39.993 CC lib/conf/conf.o 00:02:39.993 CC lib/json/json_util.o 00:02:39.993 CC lib/env_dpdk/env.o 00:02:39.993 CC lib/rdma_provider/common.o 00:02:39.993 CC lib/idxd/idxd_user.o 00:02:39.993 CC lib/json/json_write.o 00:02:39.993 CC lib/env_dpdk/memory.o 00:02:39.993 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:39.993 CC lib/idxd/idxd_kernel.o 00:02:39.993 CC lib/env_dpdk/pci.o 00:02:39.993 CC lib/env_dpdk/init.o 00:02:39.993 CC lib/env_dpdk/threads.o 00:02:39.993 CC lib/env_dpdk/pci_ioat.o 00:02:39.993 CC lib/env_dpdk/pci_virtio.o 00:02:39.993 CC lib/env_dpdk/pci_vmd.o 00:02:39.993 CC lib/env_dpdk/pci_idxd.o 00:02:39.993 CC lib/env_dpdk/pci_event.o 00:02:39.993 CC lib/env_dpdk/sigbus_handler.o 00:02:39.993 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:39.993 CC lib/env_dpdk/pci_dpdk.o 00:02:39.993 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:39.993 LIB libspdk_trace_parser.a 00:02:39.993 SO libspdk_trace_parser.so.5.0 00:02:40.251 SYMLINK libspdk_trace_parser.so 00:02:40.251 LIB libspdk_rdma_provider.a 00:02:40.251 SO libspdk_rdma_provider.so.6.0 00:02:40.251 LIB libspdk_conf.a 00:02:40.251 SO libspdk_conf.so.6.0 00:02:40.251 SYMLINK libspdk_rdma_provider.so 
00:02:40.251 LIB libspdk_rdma_utils.a 00:02:40.251 LIB libspdk_json.a 00:02:40.251 SYMLINK libspdk_conf.so 00:02:40.251 SO libspdk_rdma_utils.so.1.0 00:02:40.251 SO libspdk_json.so.6.0 00:02:40.251 SYMLINK libspdk_rdma_utils.so 00:02:40.510 SYMLINK libspdk_json.so 00:02:40.510 CC lib/jsonrpc/jsonrpc_server.o 00:02:40.510 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:40.510 CC lib/jsonrpc/jsonrpc_client.o 00:02:40.510 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:40.510 LIB libspdk_idxd.a 00:02:40.768 SO libspdk_idxd.so.12.0 00:02:40.768 SYMLINK libspdk_idxd.so 00:02:40.768 LIB libspdk_vmd.a 00:02:40.768 SO libspdk_vmd.so.6.0 00:02:40.768 SYMLINK libspdk_vmd.so 00:02:40.768 LIB libspdk_jsonrpc.a 00:02:40.768 SO libspdk_jsonrpc.so.6.0 00:02:41.026 SYMLINK libspdk_jsonrpc.so 00:02:41.026 CC lib/rpc/rpc.o 00:02:41.284 LIB libspdk_rpc.a 00:02:41.284 SO libspdk_rpc.so.6.0 00:02:41.284 SYMLINK libspdk_rpc.so 00:02:41.284 LIB libspdk_env_dpdk.a 00:02:41.543 SO libspdk_env_dpdk.so.15.0 00:02:41.543 CC lib/trace/trace.o 00:02:41.543 CC lib/keyring/keyring.o 00:02:41.543 CC lib/trace/trace_flags.o 00:02:41.543 CC lib/keyring/keyring_rpc.o 00:02:41.543 CC lib/trace/trace_rpc.o 00:02:41.543 CC lib/notify/notify.o 00:02:41.543 CC lib/notify/notify_rpc.o 00:02:41.543 SYMLINK libspdk_env_dpdk.so 00:02:41.802 LIB libspdk_notify.a 00:02:41.802 SO libspdk_notify.so.6.0 00:02:41.802 LIB libspdk_keyring.a 00:02:41.802 SYMLINK libspdk_notify.so 00:02:41.802 LIB libspdk_trace.a 00:02:41.802 SO libspdk_keyring.so.1.0 00:02:41.802 SO libspdk_trace.so.10.0 00:02:41.802 SYMLINK libspdk_keyring.so 00:02:41.802 SYMLINK libspdk_trace.so 00:02:42.060 CC lib/thread/thread.o 00:02:42.060 CC lib/thread/iobuf.o 00:02:42.060 CC lib/sock/sock.o 00:02:42.060 CC lib/sock/sock_rpc.o 00:02:42.626 LIB libspdk_sock.a 00:02:42.626 SO libspdk_sock.so.10.0 00:02:42.626 SYMLINK libspdk_sock.so 00:02:42.626 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:42.626 CC lib/nvme/nvme_ctrlr.o 00:02:42.626 CC lib/nvme/nvme_fabric.o 00:02:42.626 CC lib/nvme/nvme_ns_cmd.o 00:02:42.626 CC lib/nvme/nvme_ns.o 00:02:42.626 CC lib/nvme/nvme_pcie_common.o 00:02:42.626 CC lib/nvme/nvme_pcie.o 00:02:42.626 CC lib/nvme/nvme_qpair.o 00:02:42.626 CC lib/nvme/nvme.o 00:02:42.626 CC lib/nvme/nvme_quirks.o 00:02:42.626 CC lib/nvme/nvme_transport.o 00:02:42.626 CC lib/nvme/nvme_discovery.o 00:02:42.626 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:42.626 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:42.626 CC lib/nvme/nvme_tcp.o 00:02:42.626 CC lib/nvme/nvme_opal.o 00:02:42.626 CC lib/nvme/nvme_io_msg.o 00:02:42.626 CC lib/nvme/nvme_poll_group.o 00:02:42.626 CC lib/nvme/nvme_zns.o 00:02:42.626 CC lib/nvme/nvme_stubs.o 00:02:42.626 CC lib/nvme/nvme_auth.o 00:02:42.626 CC lib/nvme/nvme_cuse.o 00:02:42.626 CC lib/nvme/nvme_vfio_user.o 00:02:42.626 CC lib/nvme/nvme_rdma.o 00:02:43.562 LIB libspdk_thread.a 00:02:43.820 SO libspdk_thread.so.10.1 00:02:43.820 SYMLINK libspdk_thread.so 00:02:43.820 CC lib/vfu_tgt/tgt_endpoint.o 00:02:43.820 CC lib/virtio/virtio.o 00:02:43.820 CC lib/vfu_tgt/tgt_rpc.o 00:02:43.820 CC lib/virtio/virtio_vhost_user.o 00:02:43.820 CC lib/accel/accel.o 00:02:43.820 CC lib/virtio/virtio_vfio_user.o 00:02:43.820 CC lib/accel/accel_rpc.o 00:02:43.820 CC lib/virtio/virtio_pci.o 00:02:43.820 CC lib/blob/blobstore.o 00:02:43.820 CC lib/accel/accel_sw.o 00:02:43.820 CC lib/init/json_config.o 00:02:43.820 CC lib/blob/request.o 00:02:43.820 CC lib/init/subsystem.o 00:02:43.820 CC lib/blob/zeroes.o 00:02:43.820 CC lib/init/subsystem_rpc.o 00:02:43.820 CC lib/blob/blob_bs_dev.o 
00:02:43.820 CC lib/init/rpc.o 00:02:44.387 LIB libspdk_init.a 00:02:44.387 SO libspdk_init.so.5.0 00:02:44.387 LIB libspdk_vfu_tgt.a 00:02:44.387 LIB libspdk_virtio.a 00:02:44.387 SYMLINK libspdk_init.so 00:02:44.387 SO libspdk_vfu_tgt.so.3.0 00:02:44.387 SO libspdk_virtio.so.7.0 00:02:44.387 SYMLINK libspdk_vfu_tgt.so 00:02:44.387 SYMLINK libspdk_virtio.so 00:02:44.387 CC lib/event/app.o 00:02:44.387 CC lib/event/reactor.o 00:02:44.387 CC lib/event/log_rpc.o 00:02:44.387 CC lib/event/app_rpc.o 00:02:44.387 CC lib/event/scheduler_static.o 00:02:44.954 LIB libspdk_event.a 00:02:44.954 SO libspdk_event.so.14.0 00:02:44.954 SYMLINK libspdk_event.so 00:02:44.954 LIB libspdk_accel.a 00:02:44.954 SO libspdk_accel.so.16.0 00:02:45.211 LIB libspdk_nvme.a 00:02:45.211 SYMLINK libspdk_accel.so 00:02:45.211 SO libspdk_nvme.so.13.1 00:02:45.211 CC lib/bdev/bdev.o 00:02:45.211 CC lib/bdev/bdev_rpc.o 00:02:45.211 CC lib/bdev/bdev_zone.o 00:02:45.211 CC lib/bdev/part.o 00:02:45.211 CC lib/bdev/scsi_nvme.o 00:02:45.469 SYMLINK libspdk_nvme.so 00:02:47.369 LIB libspdk_blob.a 00:02:47.369 SO libspdk_blob.so.11.0 00:02:47.369 SYMLINK libspdk_blob.so 00:02:47.369 CC lib/blobfs/blobfs.o 00:02:47.369 CC lib/blobfs/tree.o 00:02:47.369 CC lib/lvol/lvol.o 00:02:47.935 LIB libspdk_bdev.a 00:02:47.935 SO libspdk_bdev.so.16.0 00:02:48.201 SYMLINK libspdk_bdev.so 00:02:48.201 LIB libspdk_blobfs.a 00:02:48.201 LIB libspdk_lvol.a 00:02:48.201 SO libspdk_blobfs.so.10.0 00:02:48.201 SO libspdk_lvol.so.10.0 00:02:48.201 CC lib/scsi/dev.o 00:02:48.201 CC lib/nbd/nbd.o 00:02:48.201 CC lib/scsi/lun.o 00:02:48.201 CC lib/nbd/nbd_rpc.o 00:02:48.201 CC lib/ftl/ftl_core.o 00:02:48.201 CC lib/scsi/port.o 00:02:48.201 CC lib/ftl/ftl_init.o 00:02:48.201 CC lib/scsi/scsi.o 00:02:48.201 CC lib/ublk/ublk.o 00:02:48.201 CC lib/scsi/scsi_bdev.o 00:02:48.201 CC lib/ftl/ftl_layout.o 00:02:48.201 CC lib/nvmf/ctrlr.o 00:02:48.201 CC lib/scsi/scsi_pr.o 00:02:48.201 CC lib/ftl/ftl_debug.o 00:02:48.201 CC lib/ublk/ublk_rpc.o 00:02:48.201 CC lib/scsi/scsi_rpc.o 00:02:48.201 CC lib/nvmf/ctrlr_discovery.o 00:02:48.201 CC lib/ftl/ftl_io.o 00:02:48.201 CC lib/nvmf/ctrlr_bdev.o 00:02:48.201 CC lib/scsi/task.o 00:02:48.201 CC lib/ftl/ftl_l2p.o 00:02:48.201 CC lib/ftl/ftl_sb.o 00:02:48.201 CC lib/nvmf/subsystem.o 00:02:48.201 CC lib/ftl/ftl_l2p_flat.o 00:02:48.201 CC lib/nvmf/nvmf.o 00:02:48.201 CC lib/nvmf/nvmf_rpc.o 00:02:48.201 CC lib/ftl/ftl_nv_cache.o 00:02:48.201 CC lib/nvmf/transport.o 00:02:48.201 CC lib/ftl/ftl_band.o 00:02:48.201 CC lib/ftl/ftl_band_ops.o 00:02:48.201 CC lib/nvmf/tcp.o 00:02:48.201 CC lib/nvmf/stubs.o 00:02:48.201 CC lib/ftl/ftl_writer.o 00:02:48.201 CC lib/ftl/ftl_rq.o 00:02:48.201 CC lib/nvmf/mdns_server.o 00:02:48.201 CC lib/nvmf/vfio_user.o 00:02:48.201 CC lib/ftl/ftl_reloc.o 00:02:48.201 CC lib/ftl/ftl_l2p_cache.o 00:02:48.201 CC lib/nvmf/rdma.o 00:02:48.201 CC lib/ftl/ftl_p2l.o 00:02:48.201 CC lib/nvmf/auth.o 00:02:48.201 SYMLINK libspdk_blobfs.so 00:02:48.201 CC lib/ftl/mngt/ftl_mngt.o 00:02:48.201 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:48.201 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:48.201 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:48.201 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:48.201 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:48.201 SYMLINK libspdk_lvol.so 00:02:48.201 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:48.462 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:48.727 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:48.727 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:48.727 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:48.727 CC 
lib/ftl/mngt/ftl_mngt_recovery.o 00:02:48.727 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:48.727 CC lib/ftl/utils/ftl_conf.o 00:02:48.727 CC lib/ftl/utils/ftl_md.o 00:02:48.727 CC lib/ftl/utils/ftl_mempool.o 00:02:48.727 CC lib/ftl/utils/ftl_bitmap.o 00:02:48.727 CC lib/ftl/utils/ftl_property.o 00:02:48.727 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:48.727 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:48.727 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:48.727 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:48.727 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:48.727 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:48.727 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:48.727 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:48.727 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:48.986 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:48.986 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:48.986 CC lib/ftl/base/ftl_base_dev.o 00:02:48.986 CC lib/ftl/base/ftl_base_bdev.o 00:02:48.986 CC lib/ftl/ftl_trace.o 00:02:48.986 LIB libspdk_nbd.a 00:02:48.986 SO libspdk_nbd.so.7.0 00:02:49.245 LIB libspdk_scsi.a 00:02:49.245 SYMLINK libspdk_nbd.so 00:02:49.245 SO libspdk_scsi.so.9.0 00:02:49.245 SYMLINK libspdk_scsi.so 00:02:49.245 LIB libspdk_ublk.a 00:02:49.504 SO libspdk_ublk.so.3.0 00:02:49.504 SYMLINK libspdk_ublk.so 00:02:49.504 CC lib/vhost/vhost.o 00:02:49.504 CC lib/iscsi/conn.o 00:02:49.504 CC lib/vhost/vhost_rpc.o 00:02:49.504 CC lib/iscsi/init_grp.o 00:02:49.504 CC lib/vhost/vhost_scsi.o 00:02:49.504 CC lib/iscsi/iscsi.o 00:02:49.504 CC lib/vhost/vhost_blk.o 00:02:49.504 CC lib/iscsi/md5.o 00:02:49.504 CC lib/vhost/rte_vhost_user.o 00:02:49.504 CC lib/iscsi/param.o 00:02:49.504 CC lib/iscsi/portal_grp.o 00:02:49.504 CC lib/iscsi/tgt_node.o 00:02:49.504 CC lib/iscsi/iscsi_subsystem.o 00:02:49.504 CC lib/iscsi/task.o 00:02:49.504 CC lib/iscsi/iscsi_rpc.o 00:02:49.762 LIB libspdk_ftl.a 00:02:50.021 SO libspdk_ftl.so.9.0 00:02:50.278 SYMLINK libspdk_ftl.so 00:02:50.845 LIB libspdk_vhost.a 00:02:50.845 SO libspdk_vhost.so.8.0 00:02:50.845 LIB libspdk_nvmf.a 00:02:50.845 SYMLINK libspdk_vhost.so 00:02:50.845 SO libspdk_nvmf.so.19.0 00:02:50.845 LIB libspdk_iscsi.a 00:02:50.845 SO libspdk_iscsi.so.8.0 00:02:51.103 SYMLINK libspdk_nvmf.so 00:02:51.103 SYMLINK libspdk_iscsi.so 00:02:51.362 CC module/vfu_device/vfu_virtio.o 00:02:51.362 CC module/env_dpdk/env_dpdk_rpc.o 00:02:51.362 CC module/vfu_device/vfu_virtio_blk.o 00:02:51.362 CC module/vfu_device/vfu_virtio_scsi.o 00:02:51.362 CC module/vfu_device/vfu_virtio_rpc.o 00:02:51.362 CC module/accel/dsa/accel_dsa.o 00:02:51.362 CC module/accel/dsa/accel_dsa_rpc.o 00:02:51.362 CC module/scheduler/gscheduler/gscheduler.o 00:02:51.362 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:51.362 CC module/keyring/linux/keyring.o 00:02:51.362 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:51.362 CC module/keyring/file/keyring.o 00:02:51.362 CC module/accel/error/accel_error.o 00:02:51.362 CC module/keyring/file/keyring_rpc.o 00:02:51.362 CC module/accel/error/accel_error_rpc.o 00:02:51.362 CC module/keyring/linux/keyring_rpc.o 00:02:51.362 CC module/blob/bdev/blob_bdev.o 00:02:51.362 CC module/accel/ioat/accel_ioat.o 00:02:51.362 CC module/sock/posix/posix.o 00:02:51.362 CC module/accel/ioat/accel_ioat_rpc.o 00:02:51.362 CC module/accel/iaa/accel_iaa.o 00:02:51.362 CC module/accel/iaa/accel_iaa_rpc.o 00:02:51.620 LIB libspdk_env_dpdk_rpc.a 00:02:51.620 SO libspdk_env_dpdk_rpc.so.6.0 00:02:51.620 SYMLINK libspdk_env_dpdk_rpc.so 00:02:51.620 LIB libspdk_keyring_linux.a 00:02:51.620 LIB libspdk_keyring_file.a 
00:02:51.620 LIB libspdk_scheduler_dpdk_governor.a 00:02:51.620 LIB libspdk_scheduler_gscheduler.a 00:02:51.620 SO libspdk_keyring_linux.so.1.0 00:02:51.620 SO libspdk_keyring_file.so.1.0 00:02:51.620 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:51.620 SO libspdk_scheduler_gscheduler.so.4.0 00:02:51.620 LIB libspdk_accel_error.a 00:02:51.620 LIB libspdk_accel_ioat.a 00:02:51.620 LIB libspdk_scheduler_dynamic.a 00:02:51.620 SO libspdk_accel_error.so.2.0 00:02:51.620 LIB libspdk_accel_iaa.a 00:02:51.620 SO libspdk_scheduler_dynamic.so.4.0 00:02:51.620 SO libspdk_accel_ioat.so.6.0 00:02:51.620 SYMLINK libspdk_keyring_linux.so 00:02:51.620 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:51.620 SYMLINK libspdk_scheduler_gscheduler.so 00:02:51.620 SYMLINK libspdk_keyring_file.so 00:02:51.620 SO libspdk_accel_iaa.so.3.0 00:02:51.620 LIB libspdk_accel_dsa.a 00:02:51.620 SYMLINK libspdk_accel_error.so 00:02:51.620 LIB libspdk_blob_bdev.a 00:02:51.620 SYMLINK libspdk_scheduler_dynamic.so 00:02:51.620 SYMLINK libspdk_accel_ioat.so 00:02:51.878 SO libspdk_accel_dsa.so.5.0 00:02:51.878 SYMLINK libspdk_accel_iaa.so 00:02:51.878 SO libspdk_blob_bdev.so.11.0 00:02:51.878 SYMLINK libspdk_blob_bdev.so 00:02:51.878 SYMLINK libspdk_accel_dsa.so 00:02:52.215 LIB libspdk_vfu_device.a 00:02:52.215 CC module/bdev/error/vbdev_error.o 00:02:52.215 CC module/bdev/delay/vbdev_delay.o 00:02:52.215 SO libspdk_vfu_device.so.3.0 00:02:52.215 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:52.215 CC module/bdev/gpt/gpt.o 00:02:52.215 CC module/bdev/error/vbdev_error_rpc.o 00:02:52.215 CC module/bdev/gpt/vbdev_gpt.o 00:02:52.215 CC module/blobfs/bdev/blobfs_bdev.o 00:02:52.215 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:52.215 CC module/bdev/nvme/bdev_nvme.o 00:02:52.215 CC module/bdev/malloc/bdev_malloc.o 00:02:52.215 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:52.215 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:52.215 CC module/bdev/aio/bdev_aio.o 00:02:52.215 CC module/bdev/iscsi/bdev_iscsi.o 00:02:52.215 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:52.215 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:52.215 CC module/bdev/aio/bdev_aio_rpc.o 00:02:52.215 CC module/bdev/lvol/vbdev_lvol.o 00:02:52.215 CC module/bdev/ftl/bdev_ftl.o 00:02:52.215 CC module/bdev/split/vbdev_split.o 00:02:52.215 CC module/bdev/passthru/vbdev_passthru.o 00:02:52.215 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:52.215 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:52.215 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:52.215 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:52.215 CC module/bdev/split/vbdev_split_rpc.o 00:02:52.215 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:52.215 CC module/bdev/raid/bdev_raid.o 00:02:52.215 CC module/bdev/nvme/nvme_rpc.o 00:02:52.215 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:52.215 CC module/bdev/null/bdev_null.o 00:02:52.215 CC module/bdev/raid/bdev_raid_rpc.o 00:02:52.215 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:52.215 CC module/bdev/nvme/bdev_mdns_client.o 00:02:52.215 CC module/bdev/raid/bdev_raid_sb.o 00:02:52.215 CC module/bdev/null/bdev_null_rpc.o 00:02:52.215 CC module/bdev/nvme/vbdev_opal.o 00:02:52.215 CC module/bdev/raid/raid0.o 00:02:52.215 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:52.215 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:52.215 CC module/bdev/raid/raid1.o 00:02:52.215 CC module/bdev/raid/concat.o 00:02:52.215 SYMLINK libspdk_vfu_device.so 00:02:52.483 LIB libspdk_sock_posix.a 00:02:52.483 SO libspdk_sock_posix.so.6.0 00:02:52.483 SYMLINK libspdk_sock_posix.so 
00:02:52.483 LIB libspdk_blobfs_bdev.a 00:02:52.483 LIB libspdk_bdev_error.a 00:02:52.483 SO libspdk_blobfs_bdev.so.6.0 00:02:52.483 SO libspdk_bdev_error.so.6.0 00:02:52.483 LIB libspdk_bdev_ftl.a 00:02:52.483 LIB libspdk_bdev_split.a 00:02:52.483 SO libspdk_bdev_ftl.so.6.0 00:02:52.483 LIB libspdk_bdev_passthru.a 00:02:52.483 LIB libspdk_bdev_null.a 00:02:52.483 SYMLINK libspdk_blobfs_bdev.so 00:02:52.483 SO libspdk_bdev_split.so.6.0 00:02:52.483 SYMLINK libspdk_bdev_error.so 00:02:52.483 SO libspdk_bdev_null.so.6.0 00:02:52.483 SO libspdk_bdev_passthru.so.6.0 00:02:52.483 LIB libspdk_bdev_gpt.a 00:02:52.483 SYMLINK libspdk_bdev_ftl.so 00:02:52.483 SO libspdk_bdev_gpt.so.6.0 00:02:52.741 SYMLINK libspdk_bdev_split.so 00:02:52.741 SYMLINK libspdk_bdev_passthru.so 00:02:52.741 SYMLINK libspdk_bdev_null.so 00:02:52.741 LIB libspdk_bdev_zone_block.a 00:02:52.741 LIB libspdk_bdev_iscsi.a 00:02:52.741 SO libspdk_bdev_zone_block.so.6.0 00:02:52.741 SYMLINK libspdk_bdev_gpt.so 00:02:52.741 SO libspdk_bdev_iscsi.so.6.0 00:02:52.741 LIB libspdk_bdev_aio.a 00:02:52.741 LIB libspdk_bdev_delay.a 00:02:52.741 LIB libspdk_bdev_malloc.a 00:02:52.741 SO libspdk_bdev_aio.so.6.0 00:02:52.741 SO libspdk_bdev_delay.so.6.0 00:02:52.741 SYMLINK libspdk_bdev_zone_block.so 00:02:52.741 SYMLINK libspdk_bdev_iscsi.so 00:02:52.741 SO libspdk_bdev_malloc.so.6.0 00:02:52.741 SYMLINK libspdk_bdev_aio.so 00:02:52.741 SYMLINK libspdk_bdev_delay.so 00:02:52.741 LIB libspdk_bdev_lvol.a 00:02:52.741 SYMLINK libspdk_bdev_malloc.so 00:02:52.741 SO libspdk_bdev_lvol.so.6.0 00:02:52.741 LIB libspdk_bdev_virtio.a 00:02:53.000 SYMLINK libspdk_bdev_lvol.so 00:02:53.000 SO libspdk_bdev_virtio.so.6.0 00:02:53.000 SYMLINK libspdk_bdev_virtio.so 00:02:53.257 LIB libspdk_bdev_raid.a 00:02:53.257 SO libspdk_bdev_raid.so.6.0 00:02:53.516 SYMLINK libspdk_bdev_raid.so 00:02:54.471 LIB libspdk_bdev_nvme.a 00:02:54.471 SO libspdk_bdev_nvme.so.7.0 00:02:54.471 SYMLINK libspdk_bdev_nvme.so 00:02:55.039 CC module/event/subsystems/iobuf/iobuf.o 00:02:55.039 CC module/event/subsystems/vmd/vmd.o 00:02:55.039 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:55.039 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:55.039 CC module/event/subsystems/keyring/keyring.o 00:02:55.039 CC module/event/subsystems/sock/sock.o 00:02:55.039 CC module/event/subsystems/scheduler/scheduler.o 00:02:55.039 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:55.039 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:55.039 LIB libspdk_event_keyring.a 00:02:55.039 LIB libspdk_event_vhost_blk.a 00:02:55.039 LIB libspdk_event_vfu_tgt.a 00:02:55.039 LIB libspdk_event_scheduler.a 00:02:55.039 LIB libspdk_event_vmd.a 00:02:55.039 LIB libspdk_event_sock.a 00:02:55.039 SO libspdk_event_keyring.so.1.0 00:02:55.039 SO libspdk_event_vhost_blk.so.3.0 00:02:55.039 LIB libspdk_event_iobuf.a 00:02:55.039 SO libspdk_event_vfu_tgt.so.3.0 00:02:55.039 SO libspdk_event_scheduler.so.4.0 00:02:55.039 SO libspdk_event_vmd.so.6.0 00:02:55.039 SO libspdk_event_sock.so.5.0 00:02:55.039 SO libspdk_event_iobuf.so.3.0 00:02:55.039 SYMLINK libspdk_event_keyring.so 00:02:55.039 SYMLINK libspdk_event_vhost_blk.so 00:02:55.039 SYMLINK libspdk_event_vfu_tgt.so 00:02:55.039 SYMLINK libspdk_event_scheduler.so 00:02:55.039 SYMLINK libspdk_event_sock.so 00:02:55.039 SYMLINK libspdk_event_vmd.so 00:02:55.298 SYMLINK libspdk_event_iobuf.so 00:02:55.298 CC module/event/subsystems/accel/accel.o 00:02:55.558 LIB libspdk_event_accel.a 00:02:55.558 SO libspdk_event_accel.so.6.0 00:02:55.558 SYMLINK 
libspdk_event_accel.so 00:02:55.815 CC module/event/subsystems/bdev/bdev.o 00:02:55.815 LIB libspdk_event_bdev.a 00:02:55.815 SO libspdk_event_bdev.so.6.0 00:02:56.072 SYMLINK libspdk_event_bdev.so 00:02:56.072 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:56.072 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:56.072 CC module/event/subsystems/ublk/ublk.o 00:02:56.072 CC module/event/subsystems/nbd/nbd.o 00:02:56.072 CC module/event/subsystems/scsi/scsi.o 00:02:56.331 LIB libspdk_event_nbd.a 00:02:56.331 LIB libspdk_event_ublk.a 00:02:56.331 LIB libspdk_event_scsi.a 00:02:56.331 SO libspdk_event_ublk.so.3.0 00:02:56.331 SO libspdk_event_nbd.so.6.0 00:02:56.331 SO libspdk_event_scsi.so.6.0 00:02:56.331 SYMLINK libspdk_event_ublk.so 00:02:56.331 SYMLINK libspdk_event_nbd.so 00:02:56.331 SYMLINK libspdk_event_scsi.so 00:02:56.331 LIB libspdk_event_nvmf.a 00:02:56.331 SO libspdk_event_nvmf.so.6.0 00:02:56.589 SYMLINK libspdk_event_nvmf.so 00:02:56.589 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:56.589 CC module/event/subsystems/iscsi/iscsi.o 00:02:56.589 LIB libspdk_event_vhost_scsi.a 00:02:56.589 LIB libspdk_event_iscsi.a 00:02:56.589 SO libspdk_event_vhost_scsi.so.3.0 00:02:56.847 SO libspdk_event_iscsi.so.6.0 00:02:56.847 SYMLINK libspdk_event_vhost_scsi.so 00:02:56.847 SYMLINK libspdk_event_iscsi.so 00:02:56.847 SO libspdk.so.6.0 00:02:56.847 SYMLINK libspdk.so 00:02:57.111 CC app/trace_record/trace_record.o 00:02:57.111 CC app/spdk_lspci/spdk_lspci.o 00:02:57.111 CC app/spdk_top/spdk_top.o 00:02:57.111 CC app/spdk_nvme_perf/perf.o 00:02:57.111 CC app/spdk_nvme_identify/identify.o 00:02:57.111 CXX app/trace/trace.o 00:02:57.111 TEST_HEADER include/spdk/accel.h 00:02:57.111 TEST_HEADER include/spdk/accel_module.h 00:02:57.111 CC app/spdk_nvme_discover/discovery_aer.o 00:02:57.111 TEST_HEADER include/spdk/assert.h 00:02:57.111 TEST_HEADER include/spdk/barrier.h 00:02:57.111 CC test/rpc_client/rpc_client_test.o 00:02:57.111 TEST_HEADER include/spdk/base64.h 00:02:57.111 TEST_HEADER include/spdk/bdev.h 00:02:57.111 TEST_HEADER include/spdk/bdev_module.h 00:02:57.111 TEST_HEADER include/spdk/bdev_zone.h 00:02:57.111 TEST_HEADER include/spdk/bit_array.h 00:02:57.111 TEST_HEADER include/spdk/bit_pool.h 00:02:57.111 TEST_HEADER include/spdk/blob_bdev.h 00:02:57.111 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:57.111 TEST_HEADER include/spdk/blobfs.h 00:02:57.111 TEST_HEADER include/spdk/blob.h 00:02:57.111 TEST_HEADER include/spdk/conf.h 00:02:57.111 TEST_HEADER include/spdk/config.h 00:02:57.111 TEST_HEADER include/spdk/cpuset.h 00:02:57.111 TEST_HEADER include/spdk/crc16.h 00:02:57.111 TEST_HEADER include/spdk/crc32.h 00:02:57.111 TEST_HEADER include/spdk/crc64.h 00:02:57.111 TEST_HEADER include/spdk/dif.h 00:02:57.111 TEST_HEADER include/spdk/dma.h 00:02:57.111 TEST_HEADER include/spdk/endian.h 00:02:57.111 TEST_HEADER include/spdk/env_dpdk.h 00:02:57.111 TEST_HEADER include/spdk/event.h 00:02:57.111 TEST_HEADER include/spdk/env.h 00:02:57.111 TEST_HEADER include/spdk/fd_group.h 00:02:57.111 TEST_HEADER include/spdk/fd.h 00:02:57.111 TEST_HEADER include/spdk/file.h 00:02:57.111 TEST_HEADER include/spdk/ftl.h 00:02:57.111 TEST_HEADER include/spdk/hexlify.h 00:02:57.111 TEST_HEADER include/spdk/gpt_spec.h 00:02:57.111 TEST_HEADER include/spdk/histogram_data.h 00:02:57.111 TEST_HEADER include/spdk/idxd.h 00:02:57.111 TEST_HEADER include/spdk/idxd_spec.h 00:02:57.111 TEST_HEADER include/spdk/init.h 00:02:57.112 TEST_HEADER include/spdk/ioat.h 00:02:57.112 TEST_HEADER 
include/spdk/ioat_spec.h 00:02:57.112 TEST_HEADER include/spdk/iscsi_spec.h 00:02:57.112 TEST_HEADER include/spdk/jsonrpc.h 00:02:57.112 TEST_HEADER include/spdk/json.h 00:02:57.112 TEST_HEADER include/spdk/keyring.h 00:02:57.112 TEST_HEADER include/spdk/keyring_module.h 00:02:57.112 TEST_HEADER include/spdk/likely.h 00:02:57.112 TEST_HEADER include/spdk/log.h 00:02:57.112 TEST_HEADER include/spdk/lvol.h 00:02:57.112 TEST_HEADER include/spdk/memory.h 00:02:57.112 TEST_HEADER include/spdk/mmio.h 00:02:57.112 TEST_HEADER include/spdk/nbd.h 00:02:57.112 TEST_HEADER include/spdk/net.h 00:02:57.112 TEST_HEADER include/spdk/notify.h 00:02:57.112 TEST_HEADER include/spdk/nvme.h 00:02:57.112 TEST_HEADER include/spdk/nvme_intel.h 00:02:57.112 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:57.112 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:57.112 TEST_HEADER include/spdk/nvme_spec.h 00:02:57.112 TEST_HEADER include/spdk/nvme_zns.h 00:02:57.112 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:57.112 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:57.112 TEST_HEADER include/spdk/nvmf.h 00:02:57.112 TEST_HEADER include/spdk/nvmf_spec.h 00:02:57.112 TEST_HEADER include/spdk/nvmf_transport.h 00:02:57.112 TEST_HEADER include/spdk/opal.h 00:02:57.112 TEST_HEADER include/spdk/opal_spec.h 00:02:57.112 TEST_HEADER include/spdk/pci_ids.h 00:02:57.112 TEST_HEADER include/spdk/pipe.h 00:02:57.112 TEST_HEADER include/spdk/queue.h 00:02:57.112 TEST_HEADER include/spdk/rpc.h 00:02:57.112 TEST_HEADER include/spdk/reduce.h 00:02:57.112 TEST_HEADER include/spdk/scheduler.h 00:02:57.112 TEST_HEADER include/spdk/scsi.h 00:02:57.112 TEST_HEADER include/spdk/scsi_spec.h 00:02:57.112 TEST_HEADER include/spdk/sock.h 00:02:57.112 TEST_HEADER include/spdk/stdinc.h 00:02:57.112 TEST_HEADER include/spdk/string.h 00:02:57.112 TEST_HEADER include/spdk/thread.h 00:02:57.112 TEST_HEADER include/spdk/trace.h 00:02:57.112 TEST_HEADER include/spdk/trace_parser.h 00:02:57.112 TEST_HEADER include/spdk/tree.h 00:02:57.112 TEST_HEADER include/spdk/ublk.h 00:02:57.112 TEST_HEADER include/spdk/util.h 00:02:57.112 TEST_HEADER include/spdk/uuid.h 00:02:57.112 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:57.112 TEST_HEADER include/spdk/version.h 00:02:57.112 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:57.112 TEST_HEADER include/spdk/vhost.h 00:02:57.112 TEST_HEADER include/spdk/vmd.h 00:02:57.112 TEST_HEADER include/spdk/xor.h 00:02:57.112 TEST_HEADER include/spdk/zipf.h 00:02:57.112 CC app/spdk_dd/spdk_dd.o 00:02:57.112 CXX test/cpp_headers/accel_module.o 00:02:57.112 CXX test/cpp_headers/accel.o 00:02:57.112 CXX test/cpp_headers/assert.o 00:02:57.112 CXX test/cpp_headers/barrier.o 00:02:57.112 CXX test/cpp_headers/base64.o 00:02:57.112 CXX test/cpp_headers/bdev.o 00:02:57.112 CC app/iscsi_tgt/iscsi_tgt.o 00:02:57.112 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:57.112 CXX test/cpp_headers/bdev_module.o 00:02:57.112 CXX test/cpp_headers/bdev_zone.o 00:02:57.112 CXX test/cpp_headers/bit_array.o 00:02:57.112 CXX test/cpp_headers/bit_pool.o 00:02:57.112 CXX test/cpp_headers/blob_bdev.o 00:02:57.112 CXX test/cpp_headers/blobfs_bdev.o 00:02:57.112 CXX test/cpp_headers/blobfs.o 00:02:57.112 CXX test/cpp_headers/blob.o 00:02:57.112 CXX test/cpp_headers/conf.o 00:02:57.112 CXX test/cpp_headers/config.o 00:02:57.112 CXX test/cpp_headers/cpuset.o 00:02:57.112 CXX test/cpp_headers/crc16.o 00:02:57.112 CC app/nvmf_tgt/nvmf_main.o 00:02:57.112 CC app/spdk_tgt/spdk_tgt.o 00:02:57.112 CXX test/cpp_headers/crc32.o 00:02:57.112 CC 
test/thread/poller_perf/poller_perf.o 00:02:57.112 CC examples/ioat/perf/perf.o 00:02:57.112 CC examples/ioat/verify/verify.o 00:02:57.112 CC test/app/jsoncat/jsoncat.o 00:02:57.112 CC examples/util/zipf/zipf.o 00:02:57.112 CC test/env/vtophys/vtophys.o 00:02:57.112 CC test/env/memory/memory_ut.o 00:02:57.112 CC test/app/histogram_perf/histogram_perf.o 00:02:57.112 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:57.112 CC test/app/stub/stub.o 00:02:57.112 CC test/env/pci/pci_ut.o 00:02:57.112 CC app/fio/nvme/fio_plugin.o 00:02:57.371 CC test/dma/test_dma/test_dma.o 00:02:57.371 CC test/app/bdev_svc/bdev_svc.o 00:02:57.371 CC app/fio/bdev/fio_plugin.o 00:02:57.371 LINK spdk_lspci 00:02:57.371 CC test/env/mem_callbacks/mem_callbacks.o 00:02:57.371 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:57.371 LINK rpc_client_test 00:02:57.371 LINK spdk_nvme_discover 00:02:57.633 LINK jsoncat 00:02:57.633 CXX test/cpp_headers/crc64.o 00:02:57.633 LINK poller_perf 00:02:57.633 LINK nvmf_tgt 00:02:57.633 CXX test/cpp_headers/dif.o 00:02:57.633 LINK histogram_perf 00:02:57.633 LINK zipf 00:02:57.633 CXX test/cpp_headers/dma.o 00:02:57.633 CXX test/cpp_headers/endian.o 00:02:57.633 LINK interrupt_tgt 00:02:57.633 CXX test/cpp_headers/env_dpdk.o 00:02:57.633 LINK env_dpdk_post_init 00:02:57.633 LINK vtophys 00:02:57.633 CXX test/cpp_headers/env.o 00:02:57.633 CXX test/cpp_headers/event.o 00:02:57.633 CXX test/cpp_headers/fd_group.o 00:02:57.633 CXX test/cpp_headers/fd.o 00:02:57.633 CXX test/cpp_headers/file.o 00:02:57.633 CXX test/cpp_headers/ftl.o 00:02:57.633 LINK spdk_trace_record 00:02:57.633 LINK iscsi_tgt 00:02:57.633 LINK stub 00:02:57.633 CXX test/cpp_headers/gpt_spec.o 00:02:57.633 CXX test/cpp_headers/hexlify.o 00:02:57.633 CXX test/cpp_headers/histogram_data.o 00:02:57.633 LINK spdk_tgt 00:02:57.633 LINK verify 00:02:57.633 CXX test/cpp_headers/idxd.o 00:02:57.633 LINK ioat_perf 00:02:57.633 CXX test/cpp_headers/idxd_spec.o 00:02:57.633 CXX test/cpp_headers/init.o 00:02:57.898 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:57.898 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:57.898 CXX test/cpp_headers/ioat.o 00:02:57.898 LINK bdev_svc 00:02:57.898 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:57.898 CXX test/cpp_headers/ioat_spec.o 00:02:57.898 CXX test/cpp_headers/iscsi_spec.o 00:02:57.898 CXX test/cpp_headers/json.o 00:02:57.898 LINK spdk_dd 00:02:57.898 CXX test/cpp_headers/jsonrpc.o 00:02:57.898 CXX test/cpp_headers/keyring.o 00:02:57.898 LINK spdk_trace 00:02:57.898 CXX test/cpp_headers/keyring_module.o 00:02:57.898 CXX test/cpp_headers/likely.o 00:02:57.898 LINK pci_ut 00:02:57.898 CXX test/cpp_headers/log.o 00:02:57.898 CXX test/cpp_headers/lvol.o 00:02:57.898 CXX test/cpp_headers/memory.o 00:02:57.898 CXX test/cpp_headers/mmio.o 00:02:57.898 CXX test/cpp_headers/nbd.o 00:02:57.898 CXX test/cpp_headers/net.o 00:02:57.898 CXX test/cpp_headers/notify.o 00:02:57.898 CXX test/cpp_headers/nvme.o 00:02:58.160 CXX test/cpp_headers/nvme_intel.o 00:02:58.160 CXX test/cpp_headers/nvme_ocssd.o 00:02:58.160 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:58.160 CXX test/cpp_headers/nvme_spec.o 00:02:58.160 CXX test/cpp_headers/nvme_zns.o 00:02:58.160 CXX test/cpp_headers/nvmf_cmd.o 00:02:58.160 LINK test_dma 00:02:58.160 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:58.160 CXX test/cpp_headers/nvmf.o 00:02:58.160 CXX test/cpp_headers/nvmf_spec.o 00:02:58.160 CXX test/cpp_headers/nvmf_transport.o 00:02:58.160 CXX test/cpp_headers/opal.o 00:02:58.160 CC test/event/event_perf/event_perf.o 
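The long run of CXX test/cpp_headers/<name>.o records through this stretch is SPDK's header self-containedness check: each public header is compiled as its own C++ translation unit, so a header missing an include or a C++ guard fails here rather than in a consumer. A hypothetical reproduction of one such unit (file name and include path are illustrative, not taken from the repo):

    # compile a one-header translation unit, e.g. for spdk/nvme.h
    printf '#include "spdk/nvme.h"\n' > nvme_hdr_check.cpp
    g++ -I include -c nvme_hdr_check.cpp -o nvme_hdr_check.o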
00:02:58.160 CC test/event/reactor_perf/reactor_perf.o 00:02:58.160 CC test/event/reactor/reactor.o 00:02:58.160 LINK nvme_fuzz 00:02:58.160 CXX test/cpp_headers/opal_spec.o 00:02:58.160 CC test/event/app_repeat/app_repeat.o 00:02:58.423 CXX test/cpp_headers/pci_ids.o 00:02:58.423 CC examples/sock/hello_world/hello_sock.o 00:02:58.423 CXX test/cpp_headers/pipe.o 00:02:58.423 CC examples/vmd/lsvmd/lsvmd.o 00:02:58.423 LINK spdk_nvme 00:02:58.423 CC examples/idxd/perf/perf.o 00:02:58.423 CC test/event/scheduler/scheduler.o 00:02:58.423 CXX test/cpp_headers/queue.o 00:02:58.423 CXX test/cpp_headers/reduce.o 00:02:58.423 LINK spdk_bdev 00:02:58.423 CC examples/vmd/led/led.o 00:02:58.423 CXX test/cpp_headers/rpc.o 00:02:58.423 CC examples/thread/thread/thread_ex.o 00:02:58.423 CXX test/cpp_headers/scheduler.o 00:02:58.423 CXX test/cpp_headers/scsi.o 00:02:58.423 CXX test/cpp_headers/scsi_spec.o 00:02:58.423 CXX test/cpp_headers/sock.o 00:02:58.423 CXX test/cpp_headers/stdinc.o 00:02:58.423 CXX test/cpp_headers/string.o 00:02:58.423 CXX test/cpp_headers/thread.o 00:02:58.423 CXX test/cpp_headers/trace.o 00:02:58.423 CXX test/cpp_headers/trace_parser.o 00:02:58.423 CXX test/cpp_headers/tree.o 00:02:58.423 CXX test/cpp_headers/ublk.o 00:02:58.423 CXX test/cpp_headers/util.o 00:02:58.423 CXX test/cpp_headers/uuid.o 00:02:58.423 LINK reactor_perf 00:02:58.423 CXX test/cpp_headers/version.o 00:02:58.423 CXX test/cpp_headers/vfio_user_pci.o 00:02:58.423 LINK reactor 00:02:58.423 LINK event_perf 00:02:58.686 CC app/vhost/vhost.o 00:02:58.686 CXX test/cpp_headers/vfio_user_spec.o 00:02:58.686 CXX test/cpp_headers/vhost.o 00:02:58.686 LINK mem_callbacks 00:02:58.686 CXX test/cpp_headers/vmd.o 00:02:58.686 CXX test/cpp_headers/xor.o 00:02:58.686 CXX test/cpp_headers/zipf.o 00:02:58.686 LINK lsvmd 00:02:58.686 LINK app_repeat 00:02:58.686 LINK spdk_nvme_perf 00:02:58.686 LINK vhost_fuzz 00:02:58.686 LINK led 00:02:58.686 LINK spdk_nvme_identify 00:02:58.686 LINK spdk_top 00:02:58.686 LINK scheduler 00:02:58.945 LINK hello_sock 00:02:58.945 CC test/nvme/sgl/sgl.o 00:02:58.945 CC test/nvme/startup/startup.o 00:02:58.945 CC test/nvme/reset/reset.o 00:02:58.945 CC test/nvme/e2edp/nvme_dp.o 00:02:58.945 CC test/nvme/aer/aer.o 00:02:58.945 CC test/nvme/overhead/overhead.o 00:02:58.945 CC test/nvme/err_injection/err_injection.o 00:02:58.945 CC test/nvme/reserve/reserve.o 00:02:58.945 CC test/accel/dif/dif.o 00:02:58.945 CC test/nvme/simple_copy/simple_copy.o 00:02:58.945 CC test/blobfs/mkfs/mkfs.o 00:02:58.945 CC test/nvme/connect_stress/connect_stress.o 00:02:58.945 CC test/nvme/boot_partition/boot_partition.o 00:02:58.945 LINK thread 00:02:58.945 CC test/nvme/fused_ordering/fused_ordering.o 00:02:58.945 CC test/nvme/compliance/nvme_compliance.o 00:02:58.945 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:58.945 CC test/nvme/cuse/cuse.o 00:02:58.945 CC test/nvme/fdp/fdp.o 00:02:58.945 LINK vhost 00:02:58.945 CC test/lvol/esnap/esnap.o 00:02:58.945 LINK idxd_perf 00:02:59.204 LINK err_injection 00:02:59.204 LINK boot_partition 00:02:59.204 LINK connect_stress 00:02:59.204 LINK doorbell_aers 00:02:59.204 LINK startup 00:02:59.204 LINK sgl 00:02:59.204 LINK reserve 00:02:59.204 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:59.204 CC examples/nvme/hello_world/hello_world.o 00:02:59.204 CC examples/nvme/reconnect/reconnect.o 00:02:59.204 CC examples/nvme/abort/abort.o 00:02:59.204 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:59.204 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:59.204 CC 
examples/nvme/hotplug/hotplug.o 00:02:59.204 CC examples/nvme/arbitration/arbitration.o 00:02:59.204 LINK memory_ut 00:02:59.204 LINK mkfs 00:02:59.204 LINK nvme_dp 00:02:59.204 LINK overhead 00:02:59.204 LINK fused_ordering 00:02:59.204 LINK nvme_compliance 00:02:59.204 LINK aer 00:02:59.204 LINK simple_copy 00:02:59.462 LINK reset 00:02:59.462 CC examples/accel/perf/accel_perf.o 00:02:59.462 CC examples/blob/hello_world/hello_blob.o 00:02:59.462 CC examples/blob/cli/blobcli.o 00:02:59.462 LINK fdp 00:02:59.462 LINK dif 00:02:59.462 LINK pmr_persistence 00:02:59.462 LINK cmb_copy 00:02:59.720 LINK hello_world 00:02:59.721 LINK hotplug 00:02:59.721 LINK reconnect 00:02:59.721 LINK arbitration 00:02:59.721 LINK hello_blob 00:02:59.721 LINK abort 00:02:59.721 LINK nvme_manage 00:02:59.979 CC test/bdev/bdevio/bdevio.o 00:02:59.979 LINK accel_perf 00:02:59.979 LINK blobcli 00:02:59.979 LINK iscsi_fuzz 00:03:00.236 CC examples/bdev/hello_world/hello_bdev.o 00:03:00.236 CC examples/bdev/bdevperf/bdevperf.o 00:03:00.236 LINK bdevio 00:03:00.494 LINK cuse 00:03:00.494 LINK hello_bdev 00:03:01.061 LINK bdevperf 00:03:01.319 CC examples/nvmf/nvmf/nvmf.o 00:03:01.577 LINK nvmf 00:03:04.109 LINK esnap 00:03:04.367 00:03:04.367 real 0m41.218s 00:03:04.367 user 7m26.059s 00:03:04.367 sys 1m48.190s 00:03:04.367 23:09:01 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:04.367 23:09:01 make -- common/autotest_common.sh@10 -- $ set +x 00:03:04.367 ************************************ 00:03:04.367 END TEST make 00:03:04.367 ************************************ 00:03:04.367 23:09:02 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:04.367 23:09:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:04.367 23:09:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:04.367 23:09:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.367 23:09:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:04.367 23:09:02 -- pm/common@44 -- $ pid=1148316 00:03:04.367 23:09:02 -- pm/common@50 -- $ kill -TERM 1148316 00:03:04.367 23:09:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.367 23:09:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:04.367 23:09:02 -- pm/common@44 -- $ pid=1148318 00:03:04.367 23:09:02 -- pm/common@50 -- $ kill -TERM 1148318 00:03:04.367 23:09:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.367 23:09:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:04.367 23:09:02 -- pm/common@44 -- $ pid=1148320 00:03:04.367 23:09:02 -- pm/common@50 -- $ kill -TERM 1148320 00:03:04.367 23:09:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.367 23:09:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:04.367 23:09:02 -- pm/common@44 -- $ pid=1148348 00:03:04.367 23:09:02 -- pm/common@50 -- $ sudo -E kill -TERM 1148348 00:03:04.367 23:09:02 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:04.367 23:09:02 -- nvmf/common.sh@7 -- # uname -s 00:03:04.367 23:09:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:04.367 23:09:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:04.367 23:09:02 -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:03:04.367 23:09:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:04.626 23:09:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:04.627 23:09:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:04.627 23:09:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:04.627 23:09:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:04.627 23:09:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:04.627 23:09:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:04.627 23:09:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:04.627 23:09:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:04.627 23:09:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:04.627 23:09:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:04.627 23:09:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:04.627 23:09:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:04.627 23:09:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:04.627 23:09:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:04.627 23:09:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:04.627 23:09:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:04.627 23:09:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:04.627 23:09:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:04.627 23:09:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:04.627 23:09:02 -- paths/export.sh@5 -- # export PATH 00:03:04.627 23:09:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:04.627 23:09:02 -- nvmf/common.sh@47 -- # : 0 00:03:04.627 23:09:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:04.627 23:09:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:04.627 23:09:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:04.627 23:09:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:04.627 23:09:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:04.627 23:09:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:04.627 23:09:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:04.627 23:09:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:04.627 23:09:02 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:04.627 23:09:02 -- spdk/autotest.sh@32 -- # uname -s 
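The NVME_HOSTNQN/NVME_HOSTID pair exported above comes from nvme-cli's generator. One way to reproduce the derivation, consistent with the values the log shows, where the UUID suffix of the NQN doubles as the host ID (a sketch; nvmf/common.sh may derive it differently):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep just the trailing UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")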
00:03:04.627 23:09:02 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:04.627 23:09:02 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:04.627 23:09:02 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:04.627 23:09:02 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:04.627 23:09:02 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:04.627 23:09:02 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:04.627 23:09:02 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:04.627 23:09:02 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:04.627 23:09:02 -- spdk/autotest.sh@48 -- # udevadm_pid=1219810 00:03:04.627 23:09:02 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:04.627 23:09:02 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:04.627 23:09:02 -- pm/common@17 -- # local monitor 00:03:04.627 23:09:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.627 23:09:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.627 23:09:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.627 23:09:02 -- pm/common@21 -- # date +%s 00:03:04.627 23:09:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.627 23:09:02 -- pm/common@21 -- # date +%s 00:03:04.627 23:09:02 -- pm/common@25 -- # sleep 1 00:03:04.627 23:09:02 -- pm/common@21 -- # date +%s 00:03:04.627 23:09:02 -- pm/common@21 -- # date +%s 00:03:04.627 23:09:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721941742 00:03:04.627 23:09:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721941742 00:03:04.627 23:09:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721941742 00:03:04.627 23:09:02 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721941742 00:03:04.627 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721941742_collect-vmstat.pm.log 00:03:04.627 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721941742_collect-cpu-load.pm.log 00:03:04.627 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721941742_collect-cpu-temp.pm.log 00:03:04.627 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721941742_collect-bmc-pm.bmc.pm.log 00:03:05.563 23:09:03 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:05.563 23:09:03 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:05.563 23:09:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:05.563 23:09:03 -- common/autotest_common.sh@10 -- # set +x 00:03:05.563 23:09:03 -- 
spdk/autotest.sh@59 -- # create_test_list 00:03:05.563 23:09:03 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:05.563 23:09:03 -- common/autotest_common.sh@10 -- # set +x 00:03:05.563 23:09:03 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:05.563 23:09:03 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:05.563 23:09:03 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:05.563 23:09:03 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:05.563 23:09:03 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:05.563 23:09:03 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:05.563 23:09:03 -- common/autotest_common.sh@1455 -- # uname 00:03:05.563 23:09:03 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:05.563 23:09:03 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:05.563 23:09:03 -- common/autotest_common.sh@1475 -- # uname 00:03:05.563 23:09:03 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:05.563 23:09:03 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:05.563 23:09:03 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:05.563 23:09:03 -- spdk/autotest.sh@72 -- # hash lcov 00:03:05.563 23:09:03 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:05.563 23:09:03 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:05.563 --rc lcov_branch_coverage=1 00:03:05.563 --rc lcov_function_coverage=1 00:03:05.563 --rc genhtml_branch_coverage=1 00:03:05.563 --rc genhtml_function_coverage=1 00:03:05.563 --rc genhtml_legend=1 00:03:05.563 --rc geninfo_all_blocks=1 00:03:05.563 ' 00:03:05.563 23:09:03 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:05.563 --rc lcov_branch_coverage=1 00:03:05.563 --rc lcov_function_coverage=1 00:03:05.563 --rc genhtml_branch_coverage=1 00:03:05.563 --rc genhtml_function_coverage=1 00:03:05.563 --rc genhtml_legend=1 00:03:05.563 --rc geninfo_all_blocks=1 00:03:05.563 ' 00:03:05.563 23:09:03 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:05.563 --rc lcov_branch_coverage=1 00:03:05.563 --rc lcov_function_coverage=1 00:03:05.563 --rc genhtml_branch_coverage=1 00:03:05.563 --rc genhtml_function_coverage=1 00:03:05.563 --rc genhtml_legend=1 00:03:05.563 --rc geninfo_all_blocks=1 00:03:05.563 --no-external' 00:03:05.563 23:09:03 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:05.563 --rc lcov_branch_coverage=1 00:03:05.563 --rc lcov_function_coverage=1 00:03:05.563 --rc genhtml_branch_coverage=1 00:03:05.563 --rc genhtml_function_coverage=1 00:03:05.563 --rc genhtml_legend=1 00:03:05.563 --rc geninfo_all_blocks=1 00:03:05.563 --no-external' 00:03:05.563 23:09:03 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:05.563 lcov: LCOV version 1.14 00:03:05.563 23:09:03 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:23.645 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:23.645 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:35.881 [the identical "no functions found" / "geninfo: WARNING: GCOV did not produce any data" pair repeats for every header stub under test/cpp_headers: accel_module, accel, assert, barrier, base64, bdev, bdev_module, bdev_zone, bit_array, blob_bdev, bit_pool, blobfs_bdev, blobfs, blob, conf, config, crc16, cpuset, crc32, crc64, dif, endian, dma, env_dpdk, env, event, fd_group, fd, file, ftl, gpt_spec, hexlify, histogram_data, idxd, init, idxd_spec, ioat, ioat_spec, iscsi_spec, json, jsonrpc, keyring, keyring_module, likely, log, lvol, memory, mmio, nbd, notify, net, nvme, nvme_intel, nvme_ocssd, nvme_ocssd_spec, nvme_spec, nvme_zns, nvmf_cmd, nvmf_fc_spec, nvmf, nvmf_spec, nvmf_transport, opal, opal_spec, pci_ids, pipe, queue, reduce, rpc, scheduler, scsi, scsi_spec, sock, stdinc, thread, string, trace, trace_parser, tree, ublk, util, uuid; the last few follow verbatim]
00:03:35.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:35.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:35.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:35.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:35.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:35.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:35.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:35.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:35.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:35.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:35.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:35.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:35.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:35.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:38.406 23:09:35 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:38.406 23:09:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:38.406 23:09:35 -- common/autotest_common.sh@10 -- # set +x 00:03:38.406 23:09:35 -- spdk/autotest.sh@91 -- # rm -f 00:03:38.406 23:09:35 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:39.338 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:39.338 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:39.338 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:39.338 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:39.338 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:39.338 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:39.338 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:39.597 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:39.597 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:39.597 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:39.597 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:39.597 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:39.597 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:39.597 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:39.597 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:39.597 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:39.597 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:39.597 23:09:37 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:39.597 23:09:37 -- common/autotest_common.sh@1669 -- # zoned_devs=() 
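autotest.sh has now entered pre_cleanup: setup.sh reset walks the PCI devices (each already bound to its kernel driver here, nvme for the 88:00.0 disk and ioatdma for the I/OAT channels), and get_zoned_devs starts filtering out zoned namespaces so later stages can avoid them. A sketch of that filter, reconstructed from the xtrace continuing below (the real helper lives in test/common/autotest_common.sh; the value stored per device is an assumption):

    # Reconstruction of get_zoned_devs; illustrative only.
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        # is_block_zoned: a namespace is zoned iff queue/zoned reads anything
        # other than "none"; nvme0n1 reports "none" in this run, so the map
        # stays empty and the (( 0 > 0 )) branch below is skipped.
        if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs[$dev]=1    # placeholder value, not the real bookkeeping
        fi
    done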
00:03:39.597 23:09:37 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:39.597 23:09:37 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:39.597 23:09:37 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:39.597 23:09:37 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:39.597 23:09:37 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:39.597 23:09:37 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:39.597 23:09:37 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:39.597 23:09:37 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:39.597 23:09:37 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:39.597 23:09:37 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:39.597 23:09:37 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:39.597 23:09:37 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:39.597 23:09:37 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:39.855 No valid GPT data, bailing 00:03:39.855 23:09:37 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:39.856 23:09:37 -- scripts/common.sh@391 -- # pt= 00:03:39.856 23:09:37 -- scripts/common.sh@392 -- # return 1 00:03:39.856 23:09:37 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:39.856 1+0 records in 00:03:39.856 1+0 records out 00:03:39.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00162655 s, 645 MB/s 00:03:39.856 23:09:37 -- spdk/autotest.sh@118 -- # sync 00:03:39.856 23:09:37 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:39.856 23:09:37 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:39.856 23:09:37 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:41.755 23:09:39 -- spdk/autotest.sh@124 -- # uname -s 00:03:41.755 23:09:39 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:41.755 23:09:39 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:41.755 23:09:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:41.755 23:09:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:41.755 23:09:39 -- common/autotest_common.sh@10 -- # set +x 00:03:41.755 ************************************ 00:03:41.755 START TEST setup.sh 00:03:41.755 ************************************ 00:03:41.755 23:09:39 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:41.755 * Looking for test storage... 
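The dd just traced is the tail of autotest's disk sanity pass: block_in_use probes /dev/nvme0n1 with SPDK's spdk-gpt.py (which bails, as logged, when it finds no SPDK GPT) and then with blkid -s PTTYPE; only when both come back empty is the drive treated as free and its first MiB zeroed (1.0 MiB at 645 MB/s above). A sketch of that flow, assuming the exit-code conventions the trace implies:

    # Assumed reconstruction of the probe-then-wipe step; destructive by design.
    block=/dev/nvme0n1
    pt=$(blkid -s PTTYPE -o value "$block")
    if ! /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py "$block" \
          && [[ -z $pt ]]; then
        # No SPDK GPT and no partition table: zero the head of the disk so the
        # setup.sh tests that follow start from a known-blank device.
        dd if=/dev/zero of="$block" bs=1M count=1
        sync
    fi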
00:03:41.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:41.755 23:09:39 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:41.755 23:09:39 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:41.755 23:09:39 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:41.755 23:09:39 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:41.755 23:09:39 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:41.755 23:09:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:41.755 ************************************ 00:03:41.755 START TEST acl 00:03:41.755 ************************************ 00:03:41.755 23:09:39 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:41.755 * Looking for test storage... 00:03:41.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:41.755 23:09:39 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:41.755 23:09:39 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:41.755 23:09:39 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:41.755 23:09:39 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:41.755 23:09:39 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.755 23:09:39 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:41.755 23:09:39 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:41.755 23:09:39 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:41.755 23:09:39 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:41.755 23:09:39 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:41.755 23:09:39 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:41.755 23:09:39 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:41.755 23:09:39 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:41.755 23:09:39 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:41.755 23:09:39 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:41.755 23:09:39 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:43.131 23:09:40 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:43.131 23:09:40 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:43.131 23:09:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.131 23:09:40 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:43.131 23:09:40 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.131 23:09:40 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:44.509 Hugepages 00:03:44.509 node hugesize free / total 00:03:44.509 23:09:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:44.509 23:09:41 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:44.509 23:09:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.509 00:03:44.509 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.509 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.510 23:09:42 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:44.510 23:09:42 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:44.510 23:09:42 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:44.510 23:09:42 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:44.510 23:09:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:44.510 ************************************ 00:03:44.510 START TEST denied 00:03:44.510 ************************************ 00:03:44.510 23:09:42 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:44.510 23:09:42 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:03:44.510 23:09:42 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output 
config 00:03:44.510 23:09:42 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:03:44.510 23:09:42 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.510 23:09:42 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:45.887 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:03:45.887 23:09:43 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:03:45.887 23:09:43 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:45.887 23:09:43 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:45.887 23:09:43 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:03:45.887 23:09:43 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:03:45.887 23:09:43 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:45.887 23:09:43 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:45.887 23:09:43 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:45.887 23:09:43 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:45.887 23:09:43 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:48.417 00:03:48.417 real 0m3.896s 00:03:48.417 user 0m1.161s 00:03:48.417 sys 0m1.813s 00:03:48.417 23:09:46 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:48.417 23:09:46 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:48.417 ************************************ 00:03:48.417 END TEST denied 00:03:48.417 ************************************ 00:03:48.417 23:09:46 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:48.417 23:09:46 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:48.417 23:09:46 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:48.417 23:09:46 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:48.417 ************************************ 00:03:48.417 START TEST allowed 00:03:48.417 ************************************ 00:03:48.417 23:09:46 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:48.417 23:09:46 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:03:48.417 23:09:46 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:48.417 23:09:46 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:03:48.417 23:09:46 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.417 23:09:46 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:50.949 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:50.949 23:09:48 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:50.949 23:09:48 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:50.949 23:09:48 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:50.949 23:09:48 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:50.949 23:09:48 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:52.325 00:03:52.325 real 0m3.899s 00:03:52.325 user 0m1.027s 00:03:52.325 sys 0m1.713s 00:03:52.325 23:09:49 
setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:52.326 23:09:49 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:52.326 ************************************ 00:03:52.326 END TEST allowed 00:03:52.326 ************************************ 00:03:52.326 00:03:52.326 real 0m10.583s 00:03:52.326 user 0m3.358s 00:03:52.326 sys 0m5.220s 00:03:52.326 23:09:50 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:52.326 23:09:50 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:52.326 ************************************ 00:03:52.326 END TEST acl 00:03:52.326 ************************************ 00:03:52.326 23:09:50 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:52.326 23:09:50 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:52.326 23:09:50 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:52.326 23:09:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:52.326 ************************************ 00:03:52.326 START TEST hugepages 00:03:52.326 ************************************ 00:03:52.326 23:09:50 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:52.586 * Looking for test storage... 00:03:52.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:52.586 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:52.586 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:52.586 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:52.586 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:52.586 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:52.586 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:52.586 23:09:50 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:52.586 23:09:50 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:52.586 23:09:50 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:52.586 23:09:50 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:52.586 23:09:50 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.586 23:09:50 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.586 23:09:50 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.586 23:09:50 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.586 23:09:50 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.586 23:09:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.586 23:09:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 42300196 kB' 'MemAvailable: 45789184 kB' 'Buffers: 2704 kB' 'Cached: 11696312 kB' 'SwapCached: 0 kB' 'Active: 8681976 kB' 'Inactive: 3491980 kB' 'Active(anon): 8289192 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 
kB' 'AnonPages: 478360 kB' 'Mapped: 180100 kB' 'Shmem: 7814252 kB' 'KReclaimable: 196124 kB' 'Slab: 564360 kB' 'SReclaimable: 196124 kB' 'SUnreclaim: 368236 kB' 'KernelStack: 12704 kB' 'PageTables: 8060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562308 kB' 'Committed_AS: 9377004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195824 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB' 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:52.587 23:09:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ [the per-field scan continues identically: for each remaining /proc/meminfo field from Active(anon) through Unaccepted, the xtrace records one "setup/common.sh@32 -- # [[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]" test followed by "setup/common.sh@32 -- # continue"] 00:03:52.588 23:09:50 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.588 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:52.589 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:52.589 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:52.589 23:09:50 setup.sh.hugepages -- 
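The scan just traced is the whole of get_meminfo's control flow: split each /proc/meminfo line on ': ', skip every field that is not the one requested, and echo the value of the first match (2048 here, the default hugepage size in kB). A minimal standalone sketch of that pattern, with illustrative names rather than the literal setup/common.sh source:

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above (hypothetical helper,
# not SPDK's actual implementation).
get_meminfo_sketch() {
    local get=$1 var val _rest
    # IFS=': ' splits "Hugepagesize:       2048 kB" into
    # var=Hugepagesize, val=2048, _rest=kB
    while IFS=': ' read -r var val _rest; do
        [[ $var == "$get" ]] || continue   # the "continue" lines in the trace
        echo "$val"                        # the "echo 2048" line in the trace
        return 0
    done </proc/meminfo
    return 1
}

get_meminfo_sketch Hugepagesize   # on this machine: 2048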
00:03:52.589 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:52.589 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:52.589 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:52.589 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:52.589 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:52.589 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:52.589 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:52.589 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:52.589 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:52.589 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:52.589 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:52.589 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:52.589 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:52.589 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:52.589 23:09:50 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:52.589 23:09:50 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:52.589 23:09:50 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:52.589 23:09:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:52.589 ************************************
00:03:52.589 START TEST default_setup
00:03:52.589 ************************************
00:03:52.589 23:09:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:03:52.589 23:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:52.589 23:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:52.589 23:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:52.589 23:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:52.589 23:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:52.589 23:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:52.589 23:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:52.589 23:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:52.589 23:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:52.589 23:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:52.589 23:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:52.589 23:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:52.589 23:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:52.589 23:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:52.589 23:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:52.589 23:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:52.589 23:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:52.589 23:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:52.589 23:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
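Two things just happened in the trace: clear_hp zeroed every per-node hugepage pool (the nested for-node/for-hp loops ending in "echo 0"), and get_test_nr_hugepages converted the requested 2097152 kB into nr_hugepages=1024 by dividing by the 2048 kB default page size. A condensed sketch of both steps, with hypothetical helper names rather than the literal setup/hugepages.sh code:

#!/usr/bin/env bash
# Illustrative condensation of the steps traced above; the sysfs writes
# need root.
default_hugepages=2048   # kB per page, as read from /proc/meminfo earlier

clear_hp_sketch() {
    local hp
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
        echo 0 >"$hp/nr_hugepages"   # empty this node's pool of this page size
    done
}

get_test_nr_hugepages_sketch() {
    local size=$1   # requested size in kB
    echo $((size / default_hugepages))
}

get_test_nr_hugepages_sketch 2097152   # -> 1024, as in the trace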
00:03:52.589 23:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:52.589 23:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:52.589 23:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:53.970 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:53.970 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:53.970 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:53.970 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:53.970 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:53.970 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:53.970 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:53.970 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:53.970 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:53.970 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:53.970 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:53.970 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:53.970 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:53.970 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:53.970 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:53.970 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:54.935 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:03:54.935 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:54.935 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:54.935 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:54.935 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:54.935 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:54.935 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:54.935 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:54.935 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:54.935 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:54.935 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:54.935 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:54.935 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:54.935 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:54.935 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.935 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:54.935 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:54.935 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.935 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.935 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:54.935 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
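The "ioatdma -> vfio-pci" and "nvme -> vfio-pci" lines above are scripts/setup.sh detaching the I/OAT DMA channels and the NVMe drive from their kernel drivers so SPDK's userspace drivers can claim them through vfio-pci. The generic sysfs mechanism behind such a rebind looks roughly like this (example address taken from the log; this is not the actual setup.sh code, and it needs root):

#!/usr/bin/env bash
# Generic PCI driver rebind sketch -- the mechanism behind a line like
# "0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci".
bdf=0000:00:04.0
echo "$bdf"   > "/sys/bus/pci/devices/$bdf/driver/unbind"    # detach ioatdma
echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"  # pin the new driver
echo "$bdf"   > /sys/bus/pci/drivers_probe                   # trigger the rebind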
00:03:54.935 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44399128 kB' 'MemAvailable: 47888084 kB' 'Buffers: 2704 kB' 'Cached: 11696408 kB' 'SwapCached: 0 kB' 'Active: 8700496 kB' 'Inactive: 3491980 kB' 'Active(anon): 8307712 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496600 kB' 'Mapped: 180148 kB' 'Shmem: 7814348 kB' 'KReclaimable: 196060 kB' 'Slab: 564024 kB' 'SReclaimable: 196060 kB' 'SUnreclaim: 367964 kB' 'KernelStack: 12560 kB' 'PageTables: 7572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9398100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195920 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB'
00:03:54.935 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [fields MemTotal through HardwareCorrupted checked: no match for AnonHugePages -> continue]
00:03:54.936 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:54.936 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:54.936 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:54.936 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:03:54.936 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:54.936 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:54.936 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:54.936 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:54.936 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:54.936 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.936 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:54.936 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:54.936 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.936 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.936 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:54.936 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:54.937 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44401272 kB' 'MemAvailable: 47890228 kB' 'Buffers: 2704 kB' 'Cached: 11696412 kB' 'SwapCached: 0 kB' 'Active: 8700796 kB' 'Inactive: 3491980 kB' 'Active(anon): 8308012 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496884 kB' 'Mapped: 180140 kB' 'Shmem: 7814352 kB' 'KReclaimable: 196060 kB' 'Slab: 563988 kB' 'SReclaimable: 196060 kB' 'SUnreclaim: 367928 kB' 'KernelStack: 12592 kB' 'PageTables: 7592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9398120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195872 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB'
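Each get_meminfo call above walks the full /proc/meminfo snapshot field by field. Purely as an illustration (not how setup/common.sh does it), the handful of hugepage counters that verify_nr_hugepages consumes can be pulled out in one pass:

awk -F': +' '/^(HugePages_|Hugepagesize|Hugetlb)/ {print $1, $2}' /proc/meminfo
# Against the snapshot above this would print:
#   HugePages_Total 1024
#   HugePages_Free 1024
#   HugePages_Rsvd 0
#   HugePages_Surp 0
#   Hugepagesize 2048 kB
#   Hugetlb 2097152 kB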
00:03:54.937 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [fields MemTotal through HugePages_Rsvd checked: no match for HugePages_Surp -> continue]
00:03:54.938 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:54.938 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:54.938 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:54.938 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
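With anon=0 and surp=0 recorded, the trace next fetches HugePages_Rsvd the same way. For orientation only -- an assumed example of the kind of consistency check these three values enable, not verify_nr_hugepages' actual assertions:

#!/usr/bin/env bash
# Assumed sanity check: the pool set up by default_setup should still hold
# exactly the requested 1024 pages, with no surplus and nothing reserved.
total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp/ {print $2}' /proc/meminfo)
rsvd=$(awk '/^HugePages_Rsvd/ {print $2}' /proc/meminfo)
if (( total - surp == 1024 && rsvd == 0 )); then
    echo "hugepage pool matches the requested 1024 pages"
fi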
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:54.938 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:54.938 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:54.938 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.938 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.938 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.938 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.938 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.938 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.938 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.939 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44401636 kB' 'MemAvailable: 47890592 kB' 'Buffers: 2704 kB' 'Cached: 11696424 kB' 'SwapCached: 0 kB' 'Active: 8700220 kB' 'Inactive: 3491980 kB' 'Active(anon): 8307436 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496260 kB' 'Mapped: 180140 kB' 'Shmem: 7814364 kB' 'KReclaimable: 196060 kB' 'Slab: 564128 kB' 'SReclaimable: 196060 kB' 'SUnreclaim: 368068 kB' 'KernelStack: 12640 kB' 'PageTables: 7712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9398140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195872 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB' 00:03:54.939 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.939 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.939 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.939 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.939 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.939 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.939 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.939 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.939 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.939 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.939 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.939 23:09:52 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.939 23:09:52 [repetitive xtrace condensed: the setup/common.sh@31-32 loop re-reads each remaining /proc/meminfo key, Buffers through CmaTotal, and skips every one with 'continue' while scanning for HugePages_Rsvd] 00:03:54.940 23:09:52
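The scan condensed above is the whole trick behind get_meminfo: split each meminfo line on ': ' and emit the value once the requested key matches. A minimal sketch of that pattern, reconstructed from the xtrace rather than copied from the vendored setup/common.sh (the real script buffers the file with mapfile; this streams it instead):

shopt -s extglob                       # the trace strips "Node N " with +([0-9])
get_meminfo() {                        # usage: get_meminfo <key> [node]
    local get=$1 node=$2 mem_f=/proc/meminfo line var val
    # given a node id, read the per-node sysfs copy instead (common.sh@23-24)
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }    # per-node lines carry a "Node N " prefix
        IFS=': ' read -r var val _ <<<"$line"
        # non-matching keys are skipped -- the long run of 'continue' above
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done <"$mem_f"
    return 1
}
# in this run: get_meminfo HugePages_Rsvd -> 0, get_meminfo HugePages_Total -> 1024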
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.940 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.940 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.940 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.940 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.940 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.940 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.940 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.940 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.940 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.940 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.940 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.940 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.940 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.940 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.940 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.940 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.940 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.940 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:54.940 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:54.940 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:54.940 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:54.940 nr_hugepages=1024 00:03:54.940 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:54.940 resv_hugepages=0 00:03:54.940 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:54.941 surplus_hugepages=0 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:54.941 anon_hugepages=0 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44403232 kB' 'MemAvailable: 47892188 kB' 'Buffers: 2704 kB' 'Cached: 11696452 kB' 'SwapCached: 0 kB' 'Active: 8700280 kB' 'Inactive: 3491980 kB' 'Active(anon): 8307496 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496332 kB' 'Mapped: 180140 kB' 'Shmem: 7814392 kB' 'KReclaimable: 196060 kB' 'Slab: 564128 kB' 'SReclaimable: 196060 kB' 'SUnreclaim: 368068 kB' 'KernelStack: 12672 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9398164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195872 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB' 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.941 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:54.941 23:09:52 [repetitive xtrace condensed: the same read/compare loop skips the remaining keys, Cached through CmaTotal, while scanning for HugePages_Total] 00:03:54.942 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.942 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.942 23:09:52
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.942 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.942 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.942 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.942 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.942 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.942 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.942 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.942 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:54.942 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:54.942 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.942 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:54.942 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:54.942 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.942 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:54.942 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.942 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.943 23:09:52 
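The get_nodes step traced just above discovers the NUMA topology from the same sysfs tree the per-node lookup reads. A hedged reconstruction of that discovery (array names follow the trace; the awk shortcut stands in for the script's own get_meminfo call):

shopt -s extglob
declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    # per-node sysfs lines look like: "Node 0 HugePages_Total:  1024"
    nodes_sys[${node##*node}]=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
done
no_nodes=${#nodes_sys[@]}   # 2 on this rig; the trace records node0=1024, node1=0
(( no_nodes > 0 ))          # hugepages.sh@33 bails out on a nodeless system

With all 1024 default pages placed on node0 and none on node1, the per-node check that follows only has a non-trivial expectation for node0.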
setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20983996 kB' 'MemUsed: 11892944 kB' 'SwapCached: 0 kB' 'Active: 5291848 kB' 'Inactive: 3354804 kB' 'Active(anon): 5019956 kB' 'Inactive(anon): 0 kB' 'Active(file): 271892 kB' 'Inactive(file): 3354804 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8542012 kB' 'Mapped: 110792 kB' 'AnonPages: 107768 kB' 'Shmem: 4915316 kB' 'KernelStack: 6808 kB' 'PageTables: 2900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83728 kB' 'Slab: 301804 kB' 'SReclaimable: 83728 kB' 'SUnreclaim: 218076 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.943 23:09:52 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:54.943 23:09:52 [repetitive xtrace condensed: the same read/compare loop skips the remaining node0 keys, Inactive(anon) through HugePages_Free, while scanning for HugePages_Surp] 00:03:54.944 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:54.944 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:54.944 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.944 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:54.944 23:09:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:54.944 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.944 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.944 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.944 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.944 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:54.944 node0=1024 expecting 1024 00:03:54.944 23:09:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:54.944 00:03:54.944 real 0m2.431s 00:03:54.944 user 0m0.646s 00:03:54.944 sys 0m0.864s 00:03:54.944 23:09:52 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:54.944 23:09:52 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:54.944 ************************************ 00:03:54.944 END TEST default_setup 00:03:54.944 ************************************ 00:03:54.944
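The default_setup pass just logged reduces to one accounting identity plus a per-node expectation. Restated as a hedged sketch using this run's values (these are standard kernel interfaces, not the script's own code):

# pass condition 1: every configured page is accounted for
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 here
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)     # 0 here
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)     # 0 here
nr=$(cat /proc/sys/vm/nr_hugepages)                           # 1024 here
(( total == nr + surp + resv )) || echo 'hugepage accounting mismatch'
# pass condition 2: the per-node split matches -- node0=1024 expecting 1024
node0=$(awk '/HugePages_Total:/ {print $NF}' /sys/devices/system/node/node0/meminfo)
[[ $node0 == 1024 ]] && echo "node0=$node0 expecting 1024: OK"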
00:03:54.944 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:03:54.944 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:54.944 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:54.944 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:54.944 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:54.944 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:55.203 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:55.203 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:55.203 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:55.204 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:55.204 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:55.204 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:55.204 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:55.204 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:55.204 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:55.204 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:55.204 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:55.204 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:55.204 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:55.204 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:55.204 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:55.204 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:55.204 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:55.204 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:55.204 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:55.204 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:55.204 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:55.204 23:09:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:56.138 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:56.138 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:56.138 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:56.138 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:56.138 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:56.138 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:56.138 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:56.138 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:56.138 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:56.138 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:56.138 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:56.138 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:56.138 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:56.138 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:56.138 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:56.138 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:56.138 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:56.403 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:56.403 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:56.403 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:56.403 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:56.403 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:56.403 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:56.403 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:56.403 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:56.403 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:56.403 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:56.403 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:56.403 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:56.403 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:56.403 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:56.403 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:56.403 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:56.403 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:56.403 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:56.403 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:56.403 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:56.403 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:56.403 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44388648 kB' 'MemAvailable: 47877604 kB' 'Buffers: 2704 kB' 'Cached: 11696520 kB' 'SwapCached: 0 kB' 'Active: 8700268 kB' 'Inactive: 3491980 kB' 'Active(anon): 8307484 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496240 kB' 'Mapped: 180152 kB' 'Shmem: 7814460 kB' 'KReclaimable: 196060 kB' 'Slab: 564060 kB' 'SReclaimable: 196060 kB' 'SUnreclaim: 368000 kB' 'KernelStack: 12688 kB' 'PageTables: 7796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9398360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB'
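The numbers traced above fit together: get_test_nr_hugepages 1048576 0 1 requests 1048576 kB (1 GiB) per node across nodes 0 and 1, and with the 2048 kB Hugepagesize reported in the snapshot that is 512 pages per node, hence NRHUGE=512, HUGENODE=0,1 and the HugePages_Total: 1024 visible above. A quick arithmetic check, with all inputs taken from this log:

    # Arithmetic check of the traced allocation; all inputs appear in the log.
    size_kb=1048576                          # get_test_nr_hugepages 1048576 0 1
    hugepagesize_kb=2048                     # 'Hugepagesize: 2048 kB'
    nodes=2                                  # HUGENODE=0,1
    per_node=$((size_kb / hugepagesize_kb))  # 512, matches nr_hugepages=512 / NRHUGE=512
    total=$((per_node * nodes))              # 1024, matches 'HugePages_Total: 1024'
    echo "$per_node per node, $total total, $((total * hugepagesize_kb)) kB pinned"
    # prints: 512 per node, 1024 total, 2097152 kB pinned ('Hugetlb: 2097152 kB')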
00:03:56.404 [... trimmed: setup/common.sh@31-@32 scan of the snapshot against AnonHugePages; every field from MemTotal through HardwareCorrupted is skipped with '# continue' ...]
00:03:56.405 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:56.405 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:56.405 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:56.405 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:56.405 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:56.405 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:56.405 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:56.405 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:56.405 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:56.405 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:56.405 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:56.405 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:56.405 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:56.405 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:56.405 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:56.405 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:56.405 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44390044 kB' 'MemAvailable: 47879000 kB' 'Buffers: 2704 kB' 'Cached: 11696524 kB' 'SwapCached: 0 kB' 'Active: 8700304 kB' 'Inactive: 3491980 kB' 'Active(anon): 8307520 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496252 kB' 'Mapped: 180152 kB' 'Shmem: 7814464 kB' 'KReclaimable: 196060 kB' 'Slab: 564040 kB' 'SReclaimable: 196060 kB' 'SUnreclaim: 367980 kB' 'KernelStack: 12688 kB' 'PageTables: 7764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9398380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195920 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB'
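At this point verify_nr_hugepages is gathering its three global counters in sequence: anon came back as 0, surp is about to be read from the snapshot just printed, and resv follows. A hedged skeleton of that bookkeeping, pieced together from the hugepages.sh@96-@100 trace lines; the transparent_hugepage sysfs path is an assumption based on the 'always [madvise] never' string in the @96 check:

    # Hedged skeleton of the global reads in verify_nr_hugepages (not the exact source).
    if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]]; then
        # THP reads back as 'always [madvise] never' here, so AnonHugePages is sampled
        anon=$(get_meminfo AnonHugePages)   # hugepages.sh@97: anon=0 above
    fi
    surp=$(get_meminfo HugePages_Surp)      # hugepages.sh@99: resolves to surp=0 below
    resv=$(get_meminfo HugePages_Rsvd)      # hugepages.sh@100: the last scan in this excerpt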
00:03:56.406 [... trimmed: setup/common.sh@31-@32 scan of the snapshot against HugePages_Surp; every field from MemTotal through HugePages_Rsvd is skipped with '# continue' ...]
00:03:56.407 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:56.407 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:56.407 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:56.407 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:56.407 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:56.407 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:56.407 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:56.407 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:56.407 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:56.407 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:56.407 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:56.407 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:56.407 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:56.407 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:56.407 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:56.407 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:56.407 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44394704 kB' 'MemAvailable: 47883660 kB' 'Buffers: 2704 kB' 'Cached: 11696524 kB' 'SwapCached: 0 kB' 'Active: 8700568 kB' 'Inactive: 3491980 kB' 'Active(anon): 8307784 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496532 kB' 'Mapped: 180152 kB' 'Shmem: 7814464 kB' 'KReclaimable: 196060 kB' 'Slab: 564040 kB' 'SReclaimable: 196060 kB' 'SUnreclaim: 367980 kB' 'KernelStack: 12720 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9398400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195920 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB'
00:03:56.408 [... trimmed: setup/common.sh@31-@32 scan against HugePages_Rsvd is still walking the snapshot (MemTotal through Slab skipped so far) when this excerpt ends ...]
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.408 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.408 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.408 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.408 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.408 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.408 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.408 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.408 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.408 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.408 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.408 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.408 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.408 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.408 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.408 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.408 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.408 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.408 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.408 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.408 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.408 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.408 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.408 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.408 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.409 23:09:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.409 23:09:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.409 23:09:53 
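For readability, here is a minimal standalone reconstruction of the get_meminfo scan this trace shows. It is pieced together from the xtrace lines (setup/common.sh@17-33): the function name, variables, and loop structure follow the trace, but treat it as an illustrative sketch rather than a verbatim copy of SPDK's setup/common.sh (the shebang, comments, and the return-1 fallback are additions).

    #!/usr/bin/env bash
    # extglob is needed for the +([0-9]) pattern used below.
    shopt -s extglob

    get_meminfo() {
        local get=$1            # field to look up, e.g. HugePages_Rsvd
        local node=${2:-}       # optional NUMA node number
        local var val
        local mem_f=/proc/meminfo mem
        # Per-node counters live in sysfs; otherwise use the system-wide file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Node files prefix every line with "Node <n> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan field by field, exactly as the trace shows: split each line on
        # ': ' and continue until the requested field name matches.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"         # e.g. "0" for HugePages_Rsvd in this run
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Rsvd      # system-wide lookup
    get_meminfo HugePages_Surp 0    # NUMA node 0 lookup

The two calls at the end mirror the lookups performed in this log; the values they print naturally depend on the machine they run on.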
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:56.409 nr_hugepages=1024 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:56.409 resv_hugepages=0 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:56.409 surplus_hugepages=0 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:56.409 anon_hugepages=0 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.409 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44394036 kB' 'MemAvailable: 47882992 kB' 'Buffers: 2704 kB' 'Cached: 11696564 kB' 'SwapCached: 0 kB' 'Active: 8700520 kB' 'Inactive: 3491980 kB' 'Active(anon): 8307736 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496464 kB' 'Mapped: 180152 kB' 'Shmem: 7814504 kB' 'KReclaimable: 196060 kB' 'Slab: 564104 kB' 'SReclaimable: 196060 kB' 'SUnreclaim: 368044 kB' 'KernelStack: 12704 kB' 'PageTables: 7844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9398424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195920 kB' 
'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB' 00:03:56.410
[trace condensed: 00:03:56.410-00:03:56.411 -- setup/common.sh@31-32 walks the snapshot above field by field, testing each name against HugePages_Total and continuing on every mismatch, MemTotal through Unaccepted.]
23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.411 23:09:53
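The dump above is the full system-wide /proc/meminfo snapshot; this pass extracts HugePages_Total. A short sketch of the consistency check setup/hugepages.sh applies to the values echoed earlier (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0), plus a sanity check against the Hugepagesize and Hugetlb lines of the dump; the variable names mirror the trace, the echoed message is illustrative:

    # Mirrors the (( 1024 == nr_hugepages + surp + resv )) test from
    # setup/hugepages.sh@107, with the values this run reported.
    nr_hugepages=1024 surp=0 resv=0
    (( 1024 == nr_hugepages + surp + resv )) && echo 'hugepage accounting consistent'
    # 1024 pages x 2048 kB (Hugepagesize) = 2097152 kB, matching 'Hugetlb: 2097152 kB'.
    echo $(( 1024 * 2048 ))    # prints 2097152

The matching branch for HugePages_Total (echo 1024; return 0) follows in the trace below.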
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:56.411 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.411 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.411 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:56.411 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:56.411 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.411 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:56.411 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.411 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:56.411 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:56.411 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:56.411 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.411 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.411 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:56.411 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.411 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:56.411 23:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.411 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.411 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.411 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:56.411 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:56.411 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.411 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.411 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.411 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.412 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22032080 kB' 'MemUsed: 10844860 kB' 'SwapCached: 0 kB' 'Active: 5291856 kB' 'Inactive: 3354804 kB' 'Active(anon): 5019964 kB' 'Inactive(anon): 0 kB' 'Active(file): 271892 kB' 'Inactive(file): 3354804 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8542056 kB' 'Mapped: 110804 kB' 'AnonPages: 107736 kB' 'Shmem: 4915360 kB' 'KernelStack: 6840 kB' 'PageTables: 2956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83728 kB' 'Slab: 301828 kB' 'SReclaimable: 83728 kB' 'SUnreclaim: 218100 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:56.412
[trace condensed: 00:03:56.412-00:03:56.413 -- setup/common.sh@31-32 scans the node0 snapshot above against HugePages_Surp, continuing on every mismatch, MemTotal through HugePages_Free.]
00:03:56.413 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.413 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.413 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.413 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.413 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.413 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.413 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:56.413 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.413 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:56.413 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.413 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.413 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.413 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:56.413 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:56.413 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.413 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.413 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.413 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.413 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 22360948
kB' 'MemUsed: 5303824 kB' 'SwapCached: 0 kB' 'Active: 3408680 kB' 'Inactive: 137176 kB' 'Active(anon): 3287788 kB' 'Inactive(anon): 0 kB' 'Active(file): 120892 kB' 'Inactive(file): 137176 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3157256 kB' 'Mapped: 69348 kB' 'AnonPages: 388716 kB' 'Shmem: 2899188 kB' 'KernelStack: 5864 kB' 'PageTables: 4888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 112332 kB' 'Slab: 262276 kB' 'SReclaimable: 112332 kB' 'SUnreclaim: 149944 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: the IFS=': ' / read / compare / continue cycle repeats for every node1 meminfo field until HugePages_Surp matches]
00:03:56.414 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:56.414 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:56.414 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:56.414 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:56.414 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:56.414 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:56.415 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:56.415 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:03:56.415 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:56.415 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:56.415 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:56.415 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
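Editor's note: the node0=512 expecting 512 and node1=512 expecting 512 lines are the test's observed-versus-expected comparison for the per-node allocation pass. A hypothetical way to eyeball the same per-node counts directly, using the kernel's standard per-node sysfs tree (not part of the SPDK scripts):

# Hypothetical spot check of the per-node 2 MiB hugepage counts reported above.
for n in /sys/devices/system/node/node[0-9]*; do
    printf '%s=%s\n' "${n##*/}" "$(cat "$n/hugepages/hugepages-2048kB/nr_hugepages")"
done
# Expected output for this run: node0=512 and node1=512.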
00:03:56.415 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:56.415
00:03:56.415 real 0m1.408s
00:03:56.415 user 0m0.593s
00:03:56.415 sys 0m0.775s
00:03:56.415 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:56.415 23:09:54 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:56.415 ************************************
00:03:56.415 END TEST per_node_1G_alloc
************************************
00:03:56.415 23:09:54 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:56.415 23:09:54 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:56.415 23:09:54 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:56.415 23:09:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:56.415 ************************************
00:03:56.415 START TEST even_2G_alloc
00:03:56.415 ************************************
00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
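Editor's note: the get_test_nr_hugepages 2097152 prologue traced above reduces to simple arithmetic: 2097152 kB (2 GiB) divided by the 2048 kB default hugepage size gives nr_hugepages=1024, which get_test_nr_hugepages_per_node then splits evenly across the two NUMA nodes. A sketch of that computation with names mirroring the trace (the kB unit for size is inferred from nr_hugepages=1024 together with Hugepagesize: 2048 kB; this is a paraphrase, not the hugepages.sh source):

# Sketch of the traced arithmetic behind nr_hugepages=1024 and the 512/512 split.
size=2097152                                   # requested size in kB (2 GiB)
default_hugepages=2048                         # kB per page, per 'Hugepagesize: 2048 kB'
nr_hugepages=$(( size / default_hugepages ))   # -> 1024
_no_nodes=2
declare -a nodes_test
per_node=$(( nr_hugepages / _no_nodes ))       # -> 512
while (( _no_nodes > 0 )); do
    nodes_test[--_no_nodes]=$per_node          # fills nodes_test[1], then nodes_test[0]
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=512 node1=512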
23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.415 23:09:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:57.796 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:57.796 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:57.796 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:57.796 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:57.796 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:57.796 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:57.796 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:57.796 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:57.796 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:57.796 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:57.796 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:57.796 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:57.796 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:57.796 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:57.796 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:57.796 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:57.796 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:57.796 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:57.796 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:57.796 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:57.796 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:57.796 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:57.796 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:57.796 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:57.796 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:57.796 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:57.796 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:57.796 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:57.796 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:57.796 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.796 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.796 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.796 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.796 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.796 
23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.796 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.796 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.796 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44410700 kB' 'MemAvailable: 47899960 kB' 'Buffers: 2704 kB' 'Cached: 11696660 kB' 'SwapCached: 0 kB' 'Active: 8701216 kB' 'Inactive: 3491980 kB' 'Active(anon): 8308432 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496928 kB' 'Mapped: 180284 kB' 'Shmem: 7814600 kB' 'KReclaimable: 196060 kB' 'Slab: 564060 kB' 'SReclaimable: 196060 kB' 'SUnreclaim: 368000 kB' 'KernelStack: 12704 kB' 'PageTables: 7792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9398660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB' 00:03:57.796 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.796 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.796 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.796 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.796 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.797 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.797 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.797 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.797 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.797 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.797 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.797 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.797 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.797 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.797 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.797 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.797 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.797 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.797 23:09:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
[xtrace condensed: the read / compare / continue cycle repeats for every /proc/meminfo field from SwapCached through Percpu while get_meminfo scans for AnonHugePages]
00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44410832 kB' 'MemAvailable: 47899788 kB' 'Buffers: 2704 kB' 'Cached: 11696664 kB' 'SwapCached: 0 kB' 'Active: 8700768 kB' 'Inactive: 3491980 kB' 'Active(anon): 8307984 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496560 kB' 'Mapped: 180176 kB' 'Shmem: 7814604 kB' 'KReclaimable: 196060 kB' 'Slab: 564088 kB' 'SReclaimable: 196060 kB' 'SUnreclaim: 368028 kB' 'KernelStack: 12720 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9398680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB' 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.798 23:09:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.798 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.799 23:09:55 
00:03:57.799 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [trace condensed: the IFS=': ' / read -r var val _ loop walks each remaining /proc/meminfo field (Dirty, Writeback, AnonPages, Mapped, Shmem, ..., HugePages_Total, HugePages_Free, HugePages_Rsvd), hitting continue on every key until it matches HugePages_Surp]
00:03:57.800 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.800 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
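[editor's note: a minimal sketch of the scan the trace above performs, reconstructed from the trace itself; get_meminfo_sketch is a hypothetical name, the real helper lives in setup/common.sh]

# Sketch, assuming /proc/meminfo's "Key: value [kB]" layout: split each line
# on ': ', skip non-matching keys (each skip logs a 'continue' in the trace),
# and echo the value once the requested key is found.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # non-matching field -> next line
        echo "$val"                        # value column; the 'kB' unit lands in the discarded tail
        return 0
    done < /proc/meminfo
}
get_meminfo_sketch HugePages_Surp          # prints 0 on this node, matching the trace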
00:03:57.800 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:57.800 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:57.800 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:57.800 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:57.800 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:57.800 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:57.800 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.800 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:57.800 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:57.800 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.800 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.800 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:57.800 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:57.800 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44410832 kB' 'MemAvailable: 47899788 kB' 'Buffers: 2704 kB' 'Cached: 11696680 kB' 'SwapCached: 0 kB' 'Active: 8700784 kB' 'Inactive: 3491980 kB' 'Active(anon): 8308000 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496556 kB' 'Mapped: 180176 kB' 'Shmem: 7814620 kB' 'KReclaimable: 196060 kB' 'Slab: 564088 kB' 'SReclaimable: 196060 kB' 'SUnreclaim: 368028 kB' 'KernelStack: 12720 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9398700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB'
00:03:57.800 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [trace condensed: the same field scan repeats over every key from MemTotal through HugePages_Free, continuing until it matches HugePages_Rsvd]
00:03:57.802 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.802 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:57.802 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:57.802 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:57.802 nr_hugepages=1024
00:03:57.802 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:57.802 resv_hugepages=0
00:03:57.802 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:57.802 surplus_hugepages=0
00:03:57.802 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:57.802 anon_hugepages=0
00:03:57.802 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:57.802 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:57.802 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:57.802 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:57.802 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
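[editor's note: the hugepages.sh@107-110 checks above assert that the kernel-reported pool is consistent; a sketch of that accounting, reusing the hypothetical get_meminfo_sketch helper and the values observed in this run]

# The pool is consistent when the kernel-reported total equals the
# requested count plus any surplus and reserved pages.
nr_hugepages=1024
surp=$(get_meminfo_sketch HugePages_Surp)     # 0 in this run
resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in this run
total=$(get_meminfo_sketch HugePages_Total)   # 1024 in this run
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2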
00:03:57.803 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:57.803 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:57.803 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.803 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:57.803 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:57.803 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.803 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.803 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:57.803 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:57.803 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44410584 kB' 'MemAvailable: 47899540 kB' 'Buffers: 2704 kB' 'Cached: 11696704 kB' 'SwapCached: 0 kB' 'Active: 8700808 kB' 'Inactive: 3491980 kB' 'Active(anon): 8308024 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496556 kB' 'Mapped: 180176 kB' 'Shmem: 7814644 kB' 'KReclaimable: 196060 kB' 'Slab: 564088 kB' 'SReclaimable: 196060 kB' 'SUnreclaim: 368028 kB' 'KernelStack: 12720 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9398724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB'
00:03:57.803 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [trace condensed: the field scan repeats over every key from MemTotal onward, continuing until it matches HugePages_Total]
00:03:57.804 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:57.804 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:57.804 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:57.804 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:57.804 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:57.804 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
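[editor's note: the common.sh@23-29 lines above show the per-node branch of the helper: a node argument redirects the read from /proc/meminfo to the node's sysfs meminfo, whose lines carry a "Node <N> " prefix that the extglob expansion strips; a sketch under those assumptions]

shopt -s extglob                         # needed for the +([0-9]) pattern below
node=0
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")         # "Node 0 HugePages_Total: 512" -> "HugePages_Total: 512"
for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == HugePages_Total ]] && echo "$val"   # 512 here: the 1024-page pool split evenly across 2 nodes
done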
+([0-9]) }") 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22032088 kB' 'MemUsed: 10844852 kB' 'SwapCached: 0 kB' 'Active: 5292108 kB' 'Inactive: 3354804 kB' 'Active(anon): 5020216 kB' 'Inactive(anon): 0 kB' 'Active(file): 271892 kB' 'Inactive(file): 3354804 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8542056 kB' 'Mapped: 110828 kB' 'AnonPages: 107952 kB' 'Shmem: 4915360 kB' 'KernelStack: 6840 kB' 'PageTables: 2948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83728 kB' 'Slab: 301792 kB' 'SReclaimable: 83728 kB' 'SUnreclaim: 218064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.805 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
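
What the trace above exercises is setup/common.sh's get_meminfo helper: pick the per-NUMA-node meminfo file when one exists, strip the "Node N " prefix those files carry on every line, then scan field by field until the requested key matches. A minimal standalone sketch of the same technique, assuming only bash with extglob enabled (the names here are illustrative, not the verbatim SPDK source):

#!/usr/bin/env bash
shopt -s extglob   # enables the +([0-9]) pattern used below

get_meminfo() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo mem
    # Prefer the per-node view when a node was requested and the file exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node N "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    # Split each "Key: value kB" line on colon/space and stop on a match.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Surp 1   # prints 0 on this box, as traced

Run under set -x, each pass of that loop shows up exactly as in the log: one read, one [[ ... ]] compare, one continue per meminfo field until the key matches.
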
00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:57.806 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:57.807 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 22379792 kB' 'MemUsed: 5284980 kB' 'SwapCached: 0 kB' 'Active: 3408536 kB' 'Inactive: 137176 kB' 'Active(anon): 3287644 kB' 'Inactive(anon): 0 kB' 'Active(file): 120892 kB' 'Inactive(file): 137176 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3157392 kB' 'Mapped: 69348 kB' 'AnonPages: 388380 kB' 'Shmem: 2899324 kB' 'KernelStack: 5864 kB' 'PageTables: 4820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 112332 kB' 'Slab: 262264 kB' 'SReclaimable: 112332 kB' 'SUnreclaim: 149932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... setup/common.sh@31-32 field scan elided: MemTotal through HugePages_Free each fail the HugePages_Surp match and continue ...]
00:03:57.808 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:57.808 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.808 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:57.808 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:57.808 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:57.808 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:57.808 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:57.808 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:57.808 node0=512 expecting 512
00:03:57.808 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:57.808 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:57.808 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:57.808 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:57.808 node1=512 expecting 512
00:03:57.808 23:09:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:57.808
00:03:57.808 real    0m1.378s
00:03:57.808 user    0m0.600s
00:03:57.808 sys     0m0.730s
00:03:57.808 23:09:55 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
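
The node0=512 and node1=512 lines are the payoff of the two per-node reads traced above: for each node the test folds reserved and surplus pages into the expected count, then reports it against what the kernel exposes. A sketch of that bookkeeping on top of the get_meminfo sketch earlier (resv was 0 in this run; the array names follow the trace, the rest is illustrative, not the verbatim hugepages.sh):

# Expected pages per node; even_2G_alloc split 1024 pages evenly.
declare -a nodes_test=(512 512)
resv=0   # HugePages_Rsvd contribution, zero in this run
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    # Surplus pages on this node also count toward the expectation (0 here).
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
done
for node in "${!nodes_test[@]}"; do
    echo "node$node=$(get_meminfo HugePages_Free "$node") expecting ${nodes_test[node]}"
done
# node0=512 expecting 512
# node1=512 expecting 512

Both nodes report 512 free pages against an expectation of 512, so the final [[ 512 == 512 ]] gate passes and the test finishes in 1.378 s of wall time.
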
00:03:57.808 23:09:55 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:57.808 ************************************
00:03:57.808 END TEST even_2G_alloc
00:03:57.808 ************************************
00:03:57.808 23:09:55 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:57.808 23:09:55 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:57.808 23:09:55 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:57.808 23:09:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:58.067 ************************************
00:03:58.067 START TEST odd_alloc
00:03:58.067 ************************************
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
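
The setup just traced is the crux of odd_alloc: 2098176 kB is 2049 MiB, so at the default 2048 kB hugepage size get_test_nr_hugepages rounds up to 1025 pages, a count that cannot split evenly across two NUMA nodes; the @81-@84 loop therefore leaves node0 with 513 pages and node1 with 512. The same arithmetic in isolation (a sketch, not the verbatim hugepages.sh loop):

# Sketch of the odd-count distribution traced above.
size_kb=2098176 page_kb=2048 nodes=2
nr_hugepages=$(( (size_kb + page_kb - 1) / page_kb ))   # ceil -> 1025
declare -a nodes_test
for (( node = nodes - 1; node >= 0; node-- )); do
    nodes_test[node]=$(( nr_hugepages / nodes ))         # 512 each
done
(( nodes_test[0] += nr_hugepages % nodes ))              # node0 absorbs the odd page
echo "${nodes_test[@]}"                                  # 513 512

HUGEMEM=2049 and HUGE_EVEN_ALLOC=yes then hand exactly that request to scripts/setup.sh, whose output follows.
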
00:03:58.067 23:09:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:59.005 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:59.005 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:59.005 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:59.005 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:59.005 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:59.005 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:59.005 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:59.005 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:59.005 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:59.005 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:59.005 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:59.005 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:59.005 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:59.005 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:59.005 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:59.005 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:59.005 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:59.005 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:59.005 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:59.005 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:59.005 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:59.005 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:59.005 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:59.005 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:59.005 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:59.005 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:59.005 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:59.005 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:59.005 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:59.005 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.005 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.005 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.005 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.005 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.005 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.005 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.005 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
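
Before counting anonymous pages, verify_nr_hugepages consults /sys/kernel/mm/transparent_hugepage/enabled at @96; the kernel brackets the active mode, so the traced value 'always [madvise] never' fails the *\[\n\e\v\e\r\]* glob and the test goes on to read AnonHugePages. The same gate in isolation (a sketch, reusing the get_meminfo sketch from earlier):

# Kernel output looks like "always [madvise] never"; brackets mark the mode.
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
anon=0
if [[ $thp != *"[never]"* ]]; then
    # THP may be in use, so anonymous huge pages must be accounted for (kB).
    anon=$(get_meminfo AnonHugePages)
fi
echo "anon=${anon}"   # 0 in this run, as the trace below confirms

Note that this lookup passes no node argument, so local node= stays empty, the node file test fails, and the helper falls back to the system-wide /proc/meminfo, which is the snapshot printed next.
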
00:03:59.005 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44392444 kB' 'MemAvailable: 47881400 kB' 'Buffers: 2704 kB' 'Cached: 11696792 kB' 'SwapCached: 0 kB' 'Active: 8696912 kB' 'Inactive: 3491980 kB' 'Active(anon): 8304128 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492568 kB' 'Mapped: 179344 kB' 'Shmem: 7814732 kB' 'KReclaimable: 196060 kB' 'Slab: 564164 kB' 'SReclaimable: 196060 kB' 'SUnreclaim: 368104 kB' 'KernelStack: 12592 kB' 'PageTables: 7280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9383196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB'
[... setup/common.sh@31-32 field scan elided: MemTotal through HardwareCorrupted each fail the AnonHugePages match and continue ...]
00:03:59.007 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:59.007 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.007 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:59.007 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:59.007 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:59.007 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:59.007 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:59.007 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
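
The snapshot above already settles the global half of the verification: HugePages_Total and HugePages_Free are both 1025, HugePages_Rsvd and HugePages_Surp are 0, and Hugetlb is exactly 1025 * 2048 kB = 2099200 kB. A quick consistency check in the same vein, again a sketch on top of the earlier get_meminfo (not part of the traced script):

# Cross-check the global hugepage counters from /proc/meminfo.
total=$(get_meminfo HugePages_Total)   # 1025
free=$(get_meminfo HugePages_Free)     # 1025
surp=$(get_meminfo HugePages_Surp)     # 0
size=$(get_meminfo Hugepagesize)       # 2048 (kB)
hugetlb=$(get_meminfo Hugetlb)         # 2099200 (kB)
(( hugetlb == total * size )) || echo 'Hugetlb does not match Total*size' >&2
echo "total=$total free=$free surp=$surp"

What remains is the per-node half, so the script immediately re-reads the surplus counter, this time with no node argument, in the trace that follows.
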
00:03:59.007 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.007 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.007 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.007 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.007 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.007 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.007 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.007 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.007 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44391416 kB' 'MemAvailable: 47880372 kB' 'Buffers: 2704 kB' 'Cached: 11696796 kB' 'SwapCached: 0 kB' 'Active: 8697064 kB' 'Inactive: 3491980 kB' 'Active(anon): 8304280 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492776 kB' 'Mapped: 179420 kB' 'Shmem: 7814736 kB' 'KReclaimable: 196060 kB' 'Slab: 564196 kB' 'SReclaimable: 196060 kB' 'SUnreclaim: 368136 kB' 'KernelStack: 12560 kB' 'PageTables: 7188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9383216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB'
[... setup/common.sh@31-32 field scan in progress: MemTotal through AnonPages each fail the HugePages_Surp match and continue ...]
00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.273 23:09:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.273 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.274 23:09:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44391816 kB' 'MemAvailable: 47880772 kB' 'Buffers: 2704 kB' 'Cached: 11696816 kB' 'SwapCached: 0 kB' 'Active: 8697280 kB' 'Inactive: 3491980 kB' 'Active(anon): 8304496 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492956 kB' 'Mapped: 179344 kB' 'Shmem: 7814756 kB' 'KReclaimable: 196060 kB' 'Slab: 564152 kB' 'SReclaimable: 196060 kB' 'SUnreclaim: 368092 kB' 'KernelStack: 12624 kB' 'PageTables: 7228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9383604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 
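For readers following the trace: the records above are the whole of a single get_meminfo call, a plain field lookup over /proc/meminfo (or a per-node sysfs copy). A minimal, self-contained sketch of that pattern follows; it is an illustrative re-implementation, not the SPDK script itself, and the name my_get_meminfo is invented here:

    #!/usr/bin/env bash
    shopt -s extglob  # needed for the +([0-9]) pattern used below

    # Illustrative sketch of the lookup traced above (the real helper
    # lives in setup/common.sh).
    my_get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, prefer that node's sysfs copy when it
        # exists (with node empty, ".../node/meminfo" never exists).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Per-node meminfo prefixes every line with "Node <n> "; strip it
        # so both file flavors parse identically.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue
            echo "$val"  # the bare value, e.g. a kB figure or page count
            return 0
        done
        return 1
    }

    my_get_meminfo HugePages_Surp    # prints 0 on the system traced here
    my_get_meminfo HugePages_Free 0  # node 0's own counter

The same walk repeats below for HugePages_Rsvd and then HugePages_Total, each time re-reading the file and stopping at the first matching key.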
00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.274 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44391816 kB' 'MemAvailable: 47880772 kB' 'Buffers: 2704 kB' 'Cached: 11696816 kB' 'SwapCached: 0 kB' 'Active: 8697280 kB' 'Inactive: 3491980 kB' 'Active(anon): 8304496 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492956 kB' 'Mapped: 179344 kB' 'Shmem: 7814756 kB' 'KReclaimable: 196060 kB' 'Slab: 564152 kB' 'SReclaimable: 196060 kB' 'SUnreclaim: 368092 kB' 'KernelStack: 12624 kB' 'PageTables: 7228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9383604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB'
[trace condensed: the same per-field walk repeats against HugePages_Rsvd, continuing past every non-matching key]
00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:59.276 nr_hugepages=1025
00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:59.276 resv_hugepages=0
00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:59.276 surplus_hugepages=0
00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:59.276 anon_hugepages=0
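With anon, surp and resv all measured as 0, the guards that follow at setup/hugepages.sh@107-110 reduce to simple accounting: HugePages_Free (1025) must equal the requested nr_hugepages plus surplus plus reserved pages, and HugePages_Total must equal the request exactly. The snapshots above are consistent with that arithmetic: 1025 pages x 2048 kB per page = 2,099,200 kB, which is precisely the 'Hugetlb: 2099200 kB' figure reported. A sketch of the same two checks, with variable names mine and values taken from this log:

    #!/usr/bin/env bash
    # The two guards from the trace, restated with this log's values.
    nr_hugepages=1025   # requested (the odd allocation under test)
    free_pages=1025     # HugePages_Free
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    total=1025          # HugePages_Total

    (( free_pages == nr_hugepages + surp + resv )) || echo "free-page accounting is off"
    (( total == nr_hugepages ))                    || echo "kernel granted a different total"
    echo "reserved memory: $(( total * 2048 )) kB" # 1025 * 2048 = 2099200 kB (Hugetlb)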
setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44395316 kB' 'MemAvailable: 47884272 kB' 'Buffers: 2704 kB' 'Cached: 11696836 kB' 'SwapCached: 0 kB' 'Active: 8699172 kB' 'Inactive: 3491980 kB' 'Active(anon): 8306388 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494876 kB' 'Mapped: 179780 kB' 'Shmem: 7814776 kB' 'KReclaimable: 196060 kB' 'Slab: 564144 kB' 'SReclaimable: 196060 kB' 'SUnreclaim: 368084 kB' 'KernelStack: 12704 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 9386168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB' 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.276 23:09:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.276 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.277 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.277 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.277 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.277 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.277 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.277 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.277 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.277 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.277 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.277 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.277 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.277 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.277 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.277 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.277 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.277 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.277 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.277 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.277 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.277 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.277 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.277 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.277 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.277 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.277 23:09:56 setup.sh.hugepages.odd_alloc -- 
[xtrace elided: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / continue for every remaining /proc/meminfo field (AnonPages through CmaFree here) until the requested HugePages_Total field matches]
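For readers decoding the elided scans: the backslash-heavy comparisons such as [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] are just how bash xtrace prints [[ $var == "$get" ]] when the right-hand side is quoted literally. Below is a minimal sketch of the loop the trace is executing; the function name get_field is illustrative, not the exact SPDK helper in setup/common.sh.

#!/usr/bin/env bash
# Minimal sketch of the scan at setup/common.sh@31-33: split each
# meminfo line on ': ' and stop at the requested field.
get_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # skip AnonPages, Mapped, ...
        echo "$val"                       # numeric column only, unit dropped
        return 0
    done < /proc/meminfo
    return 1
}

get_field HugePages_Total   # printed 1025 on this test node

The one-continue-per-field pattern is why the trace repeats so heavily: with xtrace on, every skipped meminfo line costs four logged statements.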
00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.278 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22016824 kB' 'MemUsed: 10860116 kB' 'SwapCached: 0 kB' 'Active: 5296012 kB' 'Inactive: 3354804 kB' 
'Active(anon): 5024120 kB' 'Inactive(anon): 0 kB' 'Active(file): 271892 kB' 'Inactive(file): 3354804 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8542132 kB' 'Mapped: 110676 kB' 'AnonPages: 111836 kB' 'Shmem: 4915436 kB' 'KernelStack: 6808 kB' 'PageTables: 2816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83728 kB' 'Slab: 301736 kB' 'SReclaimable: 83728 kB' 'SUnreclaim: 218008 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: the same setup/common.sh@31-32 field scan walks the node0 meminfo fields from MemTotal through Unaccepted; the trace resumes at the final HugePages_* checks]
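The per-node queries in this block read /sys/devices/system/node/nodeN/meminfo instead of /proc/meminfo; those files prefix every line with "Node N ", which the trace strips with an extglob parameter expansion before the same field scan runs. A sketch of that selection and stripping logic, with variable names following the trace (mem_f, mem):

#!/usr/bin/env bash
# Sketch of the per-node source selection seen at setup/common.sh@22-29.
shopt -s extglob            # needed for the +([0-9]) pattern below
node=0
mem_f=/proc/meminfo
# Prefer the per-node view when a NUMA node is requested.
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem < "$mem_f"
# Per-node files prefix every line with "Node 0 "; strip it so the
# same field scan works on both sources.
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]:0:3}"   # first three fields, prefix removed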
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.279 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.279 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.279 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.279 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.279 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.279 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.279 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.279 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.279 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.279 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:59.279 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.279 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.279 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.279 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:59.279 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.279 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:59.279 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:59.279 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.279 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.279 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:59.279 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:59.279 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.279 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.280 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.280 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.280 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 22372908 kB' 'MemUsed: 5291864 kB' 'SwapCached: 0 kB' 'Active: 3406672 kB' 'Inactive: 137176 kB' 'Active(anon): 3285780 kB' 'Inactive(anon): 0 kB' 'Active(file): 120892 kB' 'Inactive(file): 137176 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3157432 kB' 'Mapped: 69256 kB' 'AnonPages: 386500 kB' 'Shmem: 2899364 kB' 'KernelStack: 5880 kB' 'PageTables: 4640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 112332 kB' 'Slab: 262408 kB' 'SReclaimable: 112332 kB' 'SUnreclaim: 150076 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:59.280 
23:09:56 setup.sh.hugepages.odd_alloc -- [xtrace elided: setup/common.sh@31-32 field scan over the node1 meminfo fields, MemTotal through HugePages_Free]
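What the odd_alloc verification below actually checks: 1025 hugepages (an odd count) were requested, the kernel reports HugePages_Total of 512 on node0 and 513 on node1, and with surplus and reserved both 0 the totals satisfy 512 + 513 = 1025. The echoes 'node0=512 expecting 513' and 'node1=513 expecting 512' show the kernel put the odd page on the other node than naively expected, yet the test passes, because hugepages.sh@126-130 compares the counts order-insensitively: assigning into an indexed bash array with the count as the index returns the indices sorted. A sketch of that trick, with array names from the trace and the values from this run:

#!/usr/bin/env bash
# Sketch of the order-insensitive comparison at hugepages.sh@126-130.
nodes_test=(512 513)   # counts observed per node in this run
nodes_sys=(513 512)    # counts expected per node (swapped here)
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1   # count becomes an array index
    sorted_s[nodes_sys[node]]=1
done
echo "${!sorted_t[@]}"   # 512 513  (indices expand in ascending order)
echo "${!sorted_s[@]}"   # 512 513
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo 'odd_alloc split OK'

That final test is what appears in the trace as [[ 512 513 == \5\1\2\ \5\1\3 ]].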
00:03:59.281 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.281 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.281 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.281 23:09:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:59.281 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.281 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.281 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.281 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.281 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:59.281 node0=512 expecting 513 00:03:59.281 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.281 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.281 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.281 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:59.281 node1=513 expecting 512 00:03:59.281 23:09:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:59.281 00:03:59.281 real 0m1.325s 00:03:59.281 user 0m0.563s 00:03:59.281 sys 0m0.719s 00:03:59.281 23:09:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:59.281 23:09:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:59.281 ************************************ 00:03:59.281 END TEST odd_alloc 00:03:59.281 ************************************ 00:03:59.281 23:09:56 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:59.281 23:09:56 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:59.281 23:09:56 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:59.281 23:09:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:59.281 ************************************ 00:03:59.281 START TEST custom_alloc 00:03:59.281 ************************************ 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # 
(( size >= default_hugepages )) 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:59.281 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:59.282 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.282 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:59.282 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.282 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.282 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.282 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:59.282 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:59.282 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:59.282 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:59.282 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:59.282 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:59.282 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:59.282 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:59.282 23:09:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:59.282 23:09:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.282 23:09:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:00.660 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:00.660 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:00.660 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:00.660 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:00.660 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:00.660 0000:00:04.3 (8086 
0e23): Already using the vfio-pci driver 00:04:00.660 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:00.660 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:00.660 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:00.660 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:00.660 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:00.660 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:00.660 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:00.660 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:00.660 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:00.660 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:00.660 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:00.660 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:00.660 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:00.660 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:00.660 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:00.660 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:00.660 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:00.660 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:00.660 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:00.660 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.660 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:00.660 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.660 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:00.660 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:00.660 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.660 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.660 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.660 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.660 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.660 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.660 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.660 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.661 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43340508 kB' 'MemAvailable: 46829464 kB' 'Buffers: 2704 kB' 'Cached: 11696928 kB' 'SwapCached: 0 kB' 'Active: 8698428 kB' 'Inactive: 3491980 kB' 'Active(anon): 8305644 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493992 kB' 'Mapped: 179492 kB' 
'Shmem: 7814868 kB' 'KReclaimable: 196060 kB' 'Slab: 563736 kB' 'SReclaimable: 196060 kB' 'SUnreclaim: 367676 kB' 'KernelStack: 12704 kB' 'PageTables: 7480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9383824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB'
[xtrace elided: setup/common.sh@31-32 field scan over the global meminfo fields for the AnonHugePages lookup]
00:04:00.661 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.661 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.661 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.661 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.661 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.661 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.661 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.661 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.661 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.662 
23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local 
node= 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43341428 kB' 'MemAvailable: 46830384 kB' 'Buffers: 2704 kB' 'Cached: 11696932 kB' 'SwapCached: 0 kB' 'Active: 8697584 kB' 'Inactive: 3491980 kB' 'Active(anon): 8304800 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493180 kB' 'Mapped: 179428 kB' 'Shmem: 7814872 kB' 'KReclaimable: 196060 kB' 'Slab: 563736 kB' 'SReclaimable: 196060 kB' 'SUnreclaim: 367676 kB' 'KernelStack: 12720 kB' 'PageTables: 7436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9383844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB' 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.662 23:09:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.662 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.663 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.664 
23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.664 23:09:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.664 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.664 23:09:58 
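For readers following the trace: the repeated common.sh@16-33 entries above are one helper, get_meminfo, scanning a meminfo snapshot key by key until the requested field matches. The sketch below reconstructs that pattern from the xtrace; the function name get_meminfo_sketch and the for-loop framing are illustrative (the real setup/common.sh may structure the loop differently), while the paths, the "Node N " prefix strip, and the IFS=': ' split are taken straight from the trace.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node +([0-9]) " prefix pattern below

    # get_meminfo_sketch KEY [NODE] -> prints KEY's value from the (per-node) meminfo
    get_meminfo_sketch() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo
        # per-node counters live in sysfs; fall back to the global file otherwise
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " prefix sysfs adds
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # e.g. var=HugePages_Surp val=0
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo_sketch HugePages_Surp   # prints 0 on the box traced above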
setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43341648 kB' 'MemAvailable: 46830604 kB' 'Buffers: 2704 kB' 'Cached: 11696944 kB' 'SwapCached: 0 kB' 'Active: 8697500 kB' 'Inactive: 3491980 kB' 'Active(anon): 8304716 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493008 kB' 'Mapped: 179352 kB' 'Shmem: 7814884 kB' 'KReclaimable: 196060 kB' 'Slab: 563720 kB' 'SReclaimable: 196060 kB' 'SUnreclaim: 367660 kB' 'KernelStack: 12736 kB' 'PageTables: 7432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9383864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB'
[xtrace compressed: the per-key scan repeats once more, continuing past every key from MemTotal through HugePages_Free while looking for HugePages_Rsvd]
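Worth noting in the trace above: get_meminfo is called here without a node argument, so common.sh@23 probes the degenerate path /sys/devices/system/node/node/meminfo (empty $node), the -e test fails, and the helper stays on the global /proc/meminfo. On a NUMA box the per-node call binds the lookup to one node's counters, along these lines (a hedged sketch; the node id and the sample output value are only examples):

    # per-node hugepage count for node 0 - sysfs keeps one meminfo per node,
    # with every line prefixed "Node 0 ..."
    node=0
    grep HugePages_Total "/sys/devices/system/node/node$node/meminfo"
    # output of the form: "Node 0 HugePages_Total:   768"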
setup/common.sh@31 -- # IFS=': ' 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:00.666 nr_hugepages=1536 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:00.666 resv_hugepages=0 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:00.666 surplus_hugepages=0 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:00.666 anon_hugepages=0 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43342640 kB' 'MemAvailable: 46831596 kB' 'Buffers: 2704 kB' 'Cached: 11696984 kB' 'SwapCached: 0 kB' 'Active: 8697736 kB' 'Inactive: 3491980 kB' 'Active(anon): 8304952 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493256 kB' 'Mapped: 179352 kB' 'Shmem: 
7814924 kB' 'KReclaimable: 196060 kB' 'Slab: 563720 kB' 'SReclaimable: 196060 kB' 'SUnreclaim: 367660 kB' 'KernelStack: 12752 kB' 'PageTables: 7492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 9383884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB' 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.666 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.667 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.667 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.667 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.667 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.667 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.667 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.667 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.667 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.667 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.667 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.667 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.667 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.667 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.667 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.667 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.667 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.667 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.667 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.667 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.667 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.667 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.667 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:00.667 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
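Every read/compare/continue record above is the same tiny parser at work: mapfile the relevant meminfo file, strip any per-node prefix, then scan key by key until the requested field matches. A minimal, runnable re-creation of what the common.sh@17-@33 trace is exercising (names follow the trace; this is a sketch, not the shipped helper):

  shopt -s extglob

  get_meminfo() {   # usage: get_meminfo <field> [numa-node]
      local get=$1 node=$2 line var val _
      local mem_f=/proc/meminfo mem
      # per-node lookups read the node's own meminfo file instead
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem <"$mem_f"
      # per-node files prefix every line with "Node N "; drop it
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<<"$line"
          if [[ $var == "$get" ]]; then
              echo "$val"   # bare value, e.g. 1536 for HugePages_Total
              return 0
          fi
      done
      return 1
  }

With that in hand, get_meminfo HugePages_Total prints 1536 on this snapshot, and get_meminfo HugePages_Surp 0 reads node0's file instead of /proc/meminfo, which is exactly the pair of lookups the trace performs next.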
00:04:00.667 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # ... (the read/compare/continue cycle walks every key from MemTotal through CmaFree and Unaccepted; none matches HugePages_Total) ...
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
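The get_nodes pass that starts here walks the NUMA topology once and records how many huge pages each node currently holds (512 on node0, 1024 on node1 in the records below). A sketch of that walk; the extglob pattern and the ${node##*node} index strip come straight from the trace, while the per-node counter file name is an assumption:

  shopt -s extglob nullglob

  declare -a nodes_sys
  get_nodes() {
      local node
      nodes_sys=()
      for node in /sys/devices/system/node/node+([0-9]); do
          # ${node##*node} strips up to the last "node": /sys/.../node1 -> 1
          # per-node 2 MB page count; this file path is assumed, not traced
          nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
      done
      no_nodes=${#nodes_sys[@]}
      (( no_nodes > 0 ))   # a machine with zero nodes is a test failure
  }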
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22022920 kB' 'MemUsed: 10854020 kB' 'SwapCached: 0 kB' 'Active: 5291188 kB' 'Inactive: 3354804 kB' 'Active(anon): 5019296 kB' 'Inactive(anon): 0 kB' 'Active(file): 271892 kB' 'Inactive(file): 3354804 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8542240 kB' 'Mapped: 110248 kB' 'AnonPages: 106940 kB' 'Shmem: 4915544 kB' 'KernelStack: 6824 kB' 'PageTables: 2856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83728 kB' 'Slab: 301488 kB' 'SReclaimable: 83728 kB' 'SUnreclaim: 217760 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:00.668 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # ... (the read/compare/continue cycle walks node0's keys from MemTotal through HugePages_Free; none matches HugePages_Surp) ...
00:04:00.670 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:00.670 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.670 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:00.670 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:00.670 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:00.670 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:00.670 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:00.670 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:00.670 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:00.670 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:00.670 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.670 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.670 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:00.670 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:00.670 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.670 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.670 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.670 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 21320732 kB' 'MemUsed: 6344040 kB' 'SwapCached: 0 kB' 'Active: 3406576 kB' 'Inactive: 137176 kB' 'Active(anon): 3285684 kB' 'Inactive(anon): 0 kB' 'Active(file): 120892 kB' 'Inactive(file): 137176 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3157456 kB' 'Mapped: 69104 kB' 'AnonPages: 386300 kB' 'Shmem: 2899388 kB' 'KernelStack: 5928 kB' 'PageTables: 4636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 112332 kB' 'Slab: 262232 kB' 'SReclaimable: 112332 kB' 'SUnreclaim: 149900 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:00.670 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.670 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # ... (the read/compare/continue cycle walks node1's keys from MemTotal through HugePages_Free; none matches HugePages_Surp) ...
00:04:00.671 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
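Both per-node lookups above follow the same pattern: point mem_f at the node's own meminfo file, strip the "Node N " prefix, and pull HugePages_Surp. The surrounding hugepages.sh loop then folds reserved and surplus pages into the per-node expectation table. A sketch of that loop, reusing the get_meminfo sketch earlier; resv (0 here) and nodes_test are set up by the surrounding script:

  # hugepages.sh@115-@117, as traced above
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))   # reserved pages are added to each node's target
      (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
  done

With resv=0 and zero surplus on both nodes, nodes_test stays at the requested 512/1024 split, which is what the expecting lines below confirm.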
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.671 23:09:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:00.671 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:00.671 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:00.671 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:00.671 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:00.671 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:04:00.671 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:00.671 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:00.671 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:00.671 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
node1=1024 expecting 1024
00:04:00.671 23:09:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:00.671 real	0m1.395s
00:04:00.671 user	0m0.591s
00:04:00.671 sys	0m0.767s
00:04:00.671 23:09:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:00.671 23:09:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:00.671 ************************************
00:04:00.671 END TEST custom_alloc
00:04:00.671 ************************************
00:04:00.671 23:09:58 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:00.671 23:09:58 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:00.671 23:09:58 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:00.671 23:09:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:00.671 ************************************
00:04:00.671 START TEST no_shrink_alloc
00:04:00.671 ************************************
00:04:00.671 23:09:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc
00:04:00.671 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:00.671 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:00.671 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:00.671 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:00.672 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:00.672 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:00.672 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:00.672 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:00.672 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:00.672 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
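The jump from size=2097152 to nr_hugepages=1024 in the records above is plain division by the default huge page size. Treating both values as kB makes the trace's numbers line up with the 'Hugepagesize: 2048 kB' snapshots earlier; the unit is inferred, not stated in the log:

  # Why get_test_nr_hugepages 2097152 0 lands on nr_hugepages=1024
  default_hugepages=2048              # kB, from the Hugepagesize snapshots (assumed unit)
  size=2097152                        # kB, first argument in the trace (assumed unit)
  echo $(( size / default_hugepages ))   # -> 1024 pages, i.e. 2 GiB of 2 MiB pages

The trailing 0 is the node list: the whole 1024-page target is pinned to node 0, which is why nodes_test[0]=1024 appears just below.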
00:04:00.671 23:09:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc
00:04:00.671 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:00.671 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:00.671 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:00.671 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:00.672 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:00.672 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:00.672 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:00.672 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:00.672 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:00.672 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:00.672 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:00.672 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:00.672 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:00.672 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:00.672 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:00.672 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:00.672 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:00.672 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:00.672 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:00.672 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:00.672 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:00.672 23:09:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:02.053 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:02.053 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:02.053 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:02.053 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:02.053 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:02.053 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:02.053 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:02.053 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:02.053 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:02.053 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:02.053 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:02.053 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:02.053 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:02.053 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:02.053 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:02.053 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:02.053 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:02.053 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:02.053 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:02.053 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:02.053 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:02.053 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:02.053 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:02.053 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:02.053 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:02.053 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
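Annotation: the get_test_nr_hugepages / get_test_nr_hugepages_per_node trace above turns the requested 2097152 kB into 1024 default-sized pages (2097152 / 2048) and, because node 0 was passed explicitly, books all of them against that node; with no node list the count would be spread over the _no_nodes system nodes instead. A simplified sketch of that bookkeeping, inferred from the trace rather than copied from setup/hugepages.sh:

    #!/usr/bin/env bash
    # Sketch (assumed logic): size is in kB, page size comes from /proc/meminfo.
    get_test_nr_hugepages() {
        local size=$1; shift                    # requested size in kB (2097152 here)
        local -a node_ids=("$@")                # optional NUMA node list ("0" here)
        local default_hugepages
        default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 kB

        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))    # 2097152 / 2048 = 1024

        local -g -a nodes_test=()
        local node
        if (( ${#node_ids[@]} > 0 )); then
            for node in "${node_ids[@]}"; do    # explicit nodes: all pages go there
                nodes_test[node]=$nr_hugepages
            done
        else
            local _no_nodes
            _no_nodes=$(ls -d /sys/devices/system/node/node[0-9]* | wc -l)  # 2 here
            for (( node = 0; node < _no_nodes; node++ )); do
                nodes_test[node]=$(( nr_hugepages / _no_nodes ))  # even split
            done
        fi
    }
    get_test_nr_hugepages 2097152 0 && echo "node0 gets ${nodes_test[0]} pages"  # 1024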
00:04:02.054 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:02.054 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:02.054 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:02.054 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.054 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.054 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.054 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.054 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.054 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.054 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.054 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.054 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44386932 kB' 'MemAvailable: 47875888 kB' 'Buffers: 2704 kB' 'Cached: 11697052 kB' 'SwapCached: 0 kB' 'Active: 8698128 kB' 'Inactive: 3491980 kB' 'Active(anon): 8305344 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493160 kB' 'Mapped: 179376 kB' 'Shmem: 7814992 kB' 'KReclaimable: 196060 kB' 'Slab: 563948 kB' 'SReclaimable: 196060 kB' 'SUnreclaim: 367888 kB' 'KernelStack: 12752 kB' 'PageTables: 7440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9384444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB'
[xtrace elided: setup/common.sh@31-32 loops over every /proc/meminfo key, comparing each to AnonHugePages and skipping it with "continue" until the key matches]
00:04:02.055 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:02.055 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:02.055 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:02.055 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
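Annotation: anon=0 above is the AnonHugePages field, and it is only sampled because the earlier guard ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]]) confirmed transparent hugepages are not hard-disabled; THP-backed anonymous memory would otherwise be counted on top of the static pool the test configured. A short sketch of that pair of reads (assumed logic, not verbatim hugepages.sh):

    #!/usr/bin/env bash
    # Read the THP policy, e.g. "always [madvise] never"; the bracketed word
    # is the active mode.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # THP can hand out huge pages behind the test's back, so sample how
        # many kB of anonymous memory are currently THP-backed (0 in this run).
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    else
        anon=0    # THP disabled: nothing to discount
    fi
    echo "AnonHugePages: ${anon} kB"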
00:04:02.055 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:02.055 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:02.055 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:02.055 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:02.055 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.055 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.055 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.055 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.055 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.055 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.055 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.055 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.055 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44386976 kB' 'MemAvailable: 47875932 kB' 'Buffers: 2704 kB' 'Cached: 11697052 kB' 'SwapCached: 0 kB' 'Active: 8698752 kB' 'Inactive: 3491980 kB' 'Active(anon): 8305968 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493788 kB' 'Mapped: 179376 kB' 'Shmem: 7814992 kB' 'KReclaimable: 196060 kB' 'Slab: 563948 kB' 'SReclaimable: 196060 kB' 'SUnreclaim: 367888 kB' 'KernelStack: 12752 kB' 'PageTables: 7460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9384460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB'
[xtrace elided: the same setup/common.sh@31-32 per-key scan, this time against HugePages_Surp, with one "continue" per non-matching /proc/meminfo field]
00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
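Annotation: surp=0 means the pool holds no surplus (overcommit) pages, and the snapshot above already shows HugePages_Rsvd: 0. A hypothetical consistency check over the captured values (HugePages_Total=1024, HugePages_Free=1024, HugePages_Rsvd=0, HugePages_Surp=0); the exact comparison hugepages.sh performs is not shown in this excerpt:

    #!/usr/bin/env bash
    # Values from the /proc/meminfo snapshot above.
    total=1024 free=1024 resv=0 surp=0

    # Surplus pages sit on top of the static pool, so the persistent pool the
    # test configured is Total minus Surp.
    persistent=$(( total - surp ))        # 1024
    # Pages actually committed to mappings: allocated plus still-reserved.
    in_use=$(( total - free + resv ))     # 0
    echo "persistent=${persistent} in_use=${in_use}"
    (( persistent == 1024 && in_use == 0 )) || echo 'hugepage accounting off'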
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44387588 kB' 'MemAvailable: 47876544 kB' 'Buffers: 2704 kB' 'Cached: 11697056 kB' 'SwapCached: 0 kB' 'Active: 8698056 kB' 'Inactive: 3491980 kB' 'Active(anon): 8305272 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493028 kB' 'Mapped: 179364 kB' 'Shmem: 7814996 kB' 'KReclaimable: 196060 kB' 'Slab: 563956 kB' 'SReclaimable: 196060 kB' 'SUnreclaim: 367896 kB' 
'KernelStack: 12752 kB' 'PageTables: 7360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9384484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB' 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.057 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.058 23:09:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.058 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
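The long run of near-identical records here is the per-field scan inside get_meminfo() from setup/common.sh: the function loads a meminfo file, then walks it with IFS=': ' and read -r var val _, hitting the continue branch for every key that is not the requested one (here HugePages_Rsvd) until the matching key is reached and its value is echoed. A minimal sketch of that lookup loop, reconstructed from the xtrace above (simplified; the verbatim setup/common.sh may differ):

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2            # e.g. get=HugePages_Rsvd; node empty => global file
        local var val
        local mem_f=/proc/meminfo mem
        # Per-node queries read the sysfs copy instead, as the node0 scan later in this log does.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")       # strip the "Node N " prefix of sysfs meminfo lines
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # each mismatch is one "continue" record in this trace
            echo "$val"                        # e.g. 0 for HugePages_Rsvd
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

Every mismatching key therefore costs three traced commands (continue, IFS=': ', read -r var val _), which is why a single field lookup expands into the long stretch of repeated records seen above and below.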
00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:02.059 nr_hugepages=1024 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.059 resv_hugepages=0 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.059 surplus_hugepages=0 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.059 anon_hugepages=0 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.059 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44387336 kB' 'MemAvailable: 47876292 kB' 'Buffers: 2704 kB' 'Cached: 11697092 kB' 'SwapCached: 0 kB' 'Active: 8698296 kB' 'Inactive: 3491980 kB' 'Active(anon): 8305512 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493676 kB' 'Mapped: 179364 kB' 'Shmem: 7815032 kB' 'KReclaimable: 196060 
kB' 'Slab: 564016 kB' 'SReclaimable: 196060 kB' 'SUnreclaim: 367956 kB' 'KernelStack: 12800 kB' 'PageTables: 7552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9384504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB' 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.060 23:09:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.060 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.061 23:09:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
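The scan in progress here re-reads HugePages_Total for the consistency checks in setup/hugepages.sh: the harness has already extracted nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and the run passes only if the kernel-reported total equals requested + surplus + reserved pages. A hedged sketch of that check, with names taken from the trace and reusing the get_meminfo sketch above (the exact script context is assumed):

    verify_nr_hugepages() {
        local nr_hugepages=1024                  # requested count (hugepages.sh@102)
        local resv surp total
        resv=$(get_meminfo HugePages_Rsvd)       # 0 in this run
        surp=$(get_meminfo HugePages_Surp)       # 0 in this run
        total=$(get_meminfo HugePages_Total)     # the scan in progress; echoes 1024 below
        # hugepages.sh@110: the kernel-reported total must balance against
        # requested + surplus + reserved pages.
        (( total == nr_hugepages + surp + resv ))
    }

The same bookkeeping is then repeated per NUMA node (hugepages.sh@112-130): get_nodes discovers node0 and node1, the per-node HugePages_Surp is read from /sys/devices/system/node/node0/meminfo, and since all 1024 pages landed on node 0 the run ends with the comparison echoed as "node0=1024 expecting 1024".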
00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.061 23:09:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.061 23:09:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.061 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20984720 kB' 'MemUsed: 11892220 kB' 'SwapCached: 0 kB' 'Active: 5290816 kB' 'Inactive: 3354804 kB' 'Active(anon): 5018924 kB' 'Inactive(anon): 0 kB' 'Active(file): 271892 kB' 'Inactive(file): 3354804 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8542360 kB' 'Mapped: 110260 kB' 'AnonPages: 106364 kB' 'Shmem: 4915664 kB' 'KernelStack: 6792 kB' 'PageTables: 2716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83728 kB' 'Slab: 301628 kB' 'SReclaimable: 83728 kB' 'SUnreclaim: 217900 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.062 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.063 23:09:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:02.063 node0=1024 expecting 1024 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.063 23:09:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:02.998 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:02.998 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:02.998 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:02.998 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:02.998 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:02.998 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:02.998 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:02.998 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:02.998 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:02.998 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:02.998 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:02.998 0000:80:04.5 (8086 0e25): Already using 
the vfio-pci driver 00:04:02.998 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:02.998 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:03.262 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:03.262 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:03.262 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:03.262 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44367368 kB' 'MemAvailable: 47856328 kB' 'Buffers: 2704 kB' 'Cached: 11697160 kB' 'SwapCached: 0 kB' 'Active: 8700072 kB' 'Inactive: 3491980 kB' 'Active(anon): 8307288 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495636 kB' 'Mapped: 179384 kB' 'Shmem: 7815100 kB' 'KReclaimable: 196068 kB' 'Slab: 563864 kB' 'SReclaimable: 196068 kB' 'SUnreclaim: 367796 kB' 'KernelStack: 12768 kB' 'PageTables: 7396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9384556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 
00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.262 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44367368 kB' 'MemAvailable: 47856328 kB' 'Buffers: 2704 kB' 'Cached: 11697160 kB' 'SwapCached: 0 kB' 'Active: 8700072 kB' 'Inactive: 3491980 kB' 'Active(anon): 8307288 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495636 kB' 'Mapped: 179384 kB' 'Shmem: 7815100 kB' 'KReclaimable: 196068 kB' 'Slab: 563864 kB' 'SReclaimable: 196068 kB' 'SUnreclaim: 367796 kB' 'KernelStack: 12768 kB' 'PageTables: 7396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9384556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB'
[... xtrace truncated: setup/common.sh@31-@32 skip every key from MemTotal through HardwareCorrupted via continue until AnonHugePages matches ...]
00:04:03.263 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:03.263 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.263 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:03.263 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
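
The get_meminfo call traced above (and twice more below) is easier to follow in source form. A minimal reconstruction pieced together from the setup/common.sh@17-@33 xtrace lines; the real SPDK helper may differ in detail:

    # Reconstructed from the xtrace above; inferred, not copied from SPDK.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-}  # @17-@18: key to fetch, optional NUMA node
        local var val
        local mem_f mem
        mem_f=/proc/meminfo                                   # @22: default source
        # @23-@25: switch to the per-node file when a node was given and it exists
        if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"                             # @28: slurp the file
        mem=("${mem[@]#Node +([0-9]) }")  # @29: drop the "Node N " prefix (extglob)
        # @31-@33: scan key by key; print the value of the first match and stop
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo HugePages_Total   # prints 1024 on this box

The backslash-escaped patterns in the trace (\A\n\o\n\H\u\g\e\P\a\g\e\s and so on) are just how xtrace renders the quoted right-hand side of [[ $var == "$get" ]]: every character is escaped to show it matches literally rather than as a glob.
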
00:04:03.263 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:03.263 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:03.263 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:03.263 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:03.263 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.263 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.263 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.263 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.263 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.263 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.263 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.263 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.264 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44365948 kB' 'MemAvailable: 47854908 kB' 'Buffers: 2704 kB' 'Cached: 11697160 kB' 'SwapCached: 0 kB' 'Active: 8700052 kB' 'Inactive: 3491980 kB' 'Active(anon): 8307268 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495640 kB' 'Mapped: 179448 kB' 'Shmem: 7815100 kB' 'KReclaimable: 196068 kB' 'Slab: 563868 kB' 'SReclaimable: 196068 kB' 'SUnreclaim: 367800 kB' 'KernelStack: 12736 kB' 'PageTables: 7288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9384572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB'
[... xtrace truncated: setup/common.sh@31-@32 skip every key from MemTotal through HugePages_Rsvd via continue until HugePages_Surp matches ...]
00:04:03.265 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.265 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.265 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:03.265 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
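
At this point anon=0 and surp=0 are in hand; the trace below fetches HugePages_Rsvd the same way and then runs the consistency checks seen at hugepages.sh@102-@110. In outline, reusing the helper sketched above (a sketch: variable handling beyond what the trace shows is assumed, not taken from the script):

    # Sketch of the bookkeeping traced at setup/hugepages.sh@97-@110.
    nr_hugepages=1024                    # the count this test expects
    anon=$(get_meminfo AnonHugePages)    # 0 here: transparent hugepages in use
    surp=$(get_meminfo HugePages_Surp)   # 0 here: surplus pages
    resv=$(get_meminfo HugePages_Rsvd)   # 0 here: reserved-but-unfaulted pages
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    # @107: the kernel-reported total must account for every page
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))
    # @109: with no surplus or reserve, the total is exactly the expected count
    (( $(get_meminfo HugePages_Total) == nr_hugepages ))

With 1024 pages total, 0 surplus, and 0 reserved, both checks pass, and the trace continues below with another get_meminfo HugePages_Total pass.
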
00:04:03.265 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.265 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.265 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.265 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.265 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.265 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.265 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44366480 kB' 'MemAvailable: 47855440 kB' 'Buffers: 2704 kB' 'Cached: 11697184 kB' 'SwapCached: 0 kB' 'Active: 8699724 kB' 'Inactive: 3491980 kB' 'Active(anon): 8306940 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495284 kB' 'Mapped: 179372 kB' 'Shmem: 7815124 kB' 'KReclaimable: 196068 kB' 'Slab: 563896 kB' 'SReclaimable: 196068 kB' 'SUnreclaim: 367828 kB' 'KernelStack: 12784 kB' 'PageTables: 7436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9384596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB' 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 23:10:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.266 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.267 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.267 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [... the same compare/continue/IFS/read trace repeats at 00:04:03.267 for every non-matching /proc/meminfo key from Mapped through FilePmdMapped ...] 00:04:03.267 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.267 23:10:00
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.267 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.267 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.267 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.267 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.267 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.267 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.267 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.267 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.267 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:03.268 nr_hugepages=1024 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.268 resv_hugepages=0 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.268 surplus_hugepages=0 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.268 anon_hugepages=0 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:03.268 23:10:00 
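The lookup that just resolved is the whole trick of the get_meminfo trace above: stream the meminfo file with IFS set to ': ' so each line splits into a key and a value, skip every non-matching key with continue, and echo the value of the first match. A minimal standalone sketch of that pattern (the function name and return convention are illustrative, not the exact setup/common.sh source):

  # Print the value recorded for one /proc/meminfo key, e.g. HugePages_Rsvd.
  get_meminfo_key() {
      local key=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$key" ]] || continue   # non-matching keys are skipped, as traced above
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1                               # key not present
  }

  get_meminfo_key HugePages_Rsvd             # prints 0 on this host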
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44366880 kB' 'MemAvailable: 47855840 kB' 'Buffers: 2704 kB' 'Cached: 11697204 kB' 'SwapCached: 0 kB' 'Active: 8699968 kB' 'Inactive: 3491980 kB' 'Active(anon): 8307184 kB' 'Inactive(anon): 0 kB' 'Active(file): 392784 kB' 'Inactive(file): 3491980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495520 kB' 'Mapped: 179372 kB' 'Shmem: 7815144 kB' 'KReclaimable: 196068 kB' 'Slab: 563896 kB' 'SReclaimable: 196068 kB' 'SUnreclaim: 367828 kB' 'KernelStack: 12800 kB' 'PageTables: 7496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 9384616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 34368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1758812 kB' 'DirectMap2M: 14938112 kB' 'DirectMap1G: 52428800 kB' 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
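The snapshot printed above is internally consistent: HugePages_Total is 1024 and Hugepagesize is 2048 kB, so the pool holds 1024 × 2048 kB = 2097152 kB, which is exactly the Hugetlb figure in the same dump. A quick shell recheck of that identity (it holds here because only the 2048 kB page size is populated; Hugetlb counts all sizes):

  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  pagesz=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
  hugetlb=$(awk '/^Hugetlb:/ {print $2}' /proc/meminfo)
  (( total * pagesz == hugetlb )) && echo 'hugetlb pool accounting is consistent'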
00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.268 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... the same compare/continue/IFS/read trace repeats for every non-matching key from Cached through FilePmdMapped ...] 00:04:03.269 23:10:00 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # read -r var val _ 00:04:03.269 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.269 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.269 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.269 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.269 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.269 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.270 23:10:00 
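The call traced here is the per-node variant of the same lookup: when a node argument is given, the reader switches its input from /proc/meminfo to /sys/devices/system/node/node<N>/meminfo, whose lines carry a "Node N " prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips before parsing. A sketch of that variant, assuming only the standard sysfs layout (the helper name is illustrative):

  # Print one key from a specific NUMA node, e.g. HugePages_Surp on node 0.
  # Per-node lines look like: "Node 0 MemTotal: 32876940 kB", so the first
  # two fields are the prefix and the third is the key.
  get_node_meminfo_key() {
      local node=$1 key=$2 _node _id var val _
      while IFS=': ' read -r _node _id var val _; do
          [[ $var == "$key" ]] && { echo "$val"; return 0; }
      done < "/sys/devices/system/node/node${node}/meminfo"
      return 1
  }

  get_node_meminfo_key 0 HugePages_Surp      # prints 0 in the run above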
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20989828 kB' 'MemUsed: 11887112 kB' 'SwapCached: 0 kB' 'Active: 5292768 kB' 'Inactive: 3354804 kB' 'Active(anon): 5020876 kB' 'Inactive(anon): 0 kB' 'Active(file): 271892 kB' 'Inactive(file): 3354804 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8542464 kB' 'Mapped: 110268 kB' 'AnonPages: 108384 kB' 'Shmem: 4915768 kB' 'KernelStack: 6808 kB' 'PageTables: 2812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83744 kB' 'Slab: 301588 kB' 'SReclaimable: 83744 kB' 'SUnreclaim: 217844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.270 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.529 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.529 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.529 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.529 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.529 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.529 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.529 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.529 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.529 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.529 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.529 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.529 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.529 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:03.529 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [... the same compare/continue/IFS/read trace repeats for every non-matching node0 meminfo key from Inactive through ShmemHugePages ...] 00:04:03.530 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.530 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.530 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.530 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.530 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.530 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.530 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.530 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.530 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.530 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.530 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.530 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.530 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.530 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.530 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.530 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.530 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.530 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.530 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.530 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.530 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.530 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.530 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.530 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.531 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.531 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.531 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.531 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:03.531 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.531 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.531 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.531 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.531 23:10:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:03.531 node0=1024 expecting 1024 00:04:03.531 23:10:00 setup.sh.hugepages.no_shrink_alloc -- 
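What the assertion chain above amounts to is plain bookkeeping: the pool size read back from the kernel must equal the requested page count plus the surplus and reserved pages, and node0's share must match the printed expectation. A condensed, self-contained recheck under the same values (the variable names are hypothetical):

  nr_hugepages=1024   # requested pool size
  surp=0 resv=0       # HugePages_Surp / HugePages_Rsvd as read back above
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  (( total == nr_hugepages + surp + resv )) || { echo 'hugepage accounting mismatch' >&2; exit 1; }
  echo "node0=${total} expecting ${nr_hugepages}"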
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:03.531 00:04:03.531 real 0m2.673s 00:04:03.531 user 0m1.098s 00:04:03.531 sys 0m1.477s 00:04:03.531 23:10:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.531 23:10:01 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:03.531 ************************************ 00:04:03.531 END TEST no_shrink_alloc 00:04:03.531 ************************************ 00:04:03.531 23:10:01 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:03.531 23:10:01 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:03.531 23:10:01 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:03.531 23:10:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:03.531 23:10:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:03.531 23:10:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:03.531 23:10:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:03.531 23:10:01 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:03.531 23:10:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:03.531 23:10:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:03.531 23:10:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:03.531 23:10:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:03.531 23:10:01 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:03.531 23:10:01 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:03.531 00:04:03.531 real 0m10.983s 00:04:03.531 user 0m4.245s 00:04:03.531 sys 0m5.571s 00:04:03.531 23:10:01 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.531 23:10:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:03.531 ************************************ 00:04:03.531 END TEST hugepages 00:04:03.531 ************************************ 00:04:03.531 23:10:01 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:03.531 23:10:01 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.531 23:10:01 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.531 23:10:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:03.531 ************************************ 00:04:03.531 START TEST driver 00:04:03.531 ************************************ 00:04:03.531 23:10:01 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:03.531 * Looking for test storage... 
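The clear_hp teardown traced above walks every hugepage-size directory on every NUMA node and writes a zero back, returning the pages to the kernel; the bare "echo 0" lines in the xtrace are those writes. Roughly, assuming the standard sysfs layout and that the redirect target is each directory's nr_hugepages file (the trace elides the redirection):

  shopt -s nullglob
  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
          echo 0 > "$hp/nr_hugepages"        # release this page size on this node
      done
  done
  export CLEAR_HUGE=yes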
00:04:03.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:03.531 23:10:01 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:03.531 23:10:01 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:03.531 23:10:01 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:06.060 23:10:03 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:06.060 23:10:03 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.061 23:10:03 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.061 23:10:03 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:06.061 ************************************ 00:04:06.061 START TEST guess_driver 00:04:06.061 ************************************ 00:04:06.061 23:10:03 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:06.061 23:10:03 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:06.061 23:10:03 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:06.061 23:10:03 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:06.061 23:10:03 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:06.061 23:10:03 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:06.061 23:10:03 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:06.061 23:10:03 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:06.061 23:10:03 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:06.061 23:10:03 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:06.061 23:10:03 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:06.061 23:10:03 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:06.061 23:10:03 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:06.061 23:10:03 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:06.061 23:10:03 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:06.061 23:10:03 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:06.061 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:06.061 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:06.061 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:06.061 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:06.061 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:06.061 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:06.061 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:06.061 23:10:03 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:06.061 23:10:03 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:06.061 23:10:03 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:06.061 23:10:03 setup.sh.driver.guess_driver 
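The pick above reduces to two questions: does the machine expose populated IOMMU groups, and does vfio_pci resolve to loadable modules. The trace answers both (141 groups, and modprobe --show-depends prints a full insmod chain), so vfio-pci wins. A sketch of the same probe (the fallback driver is an assumption, not shown in this run):

  shopt -s nullglob
  groups=(/sys/kernel/iommu_groups/*)
  if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci > /dev/null 2>&1; then
      driver=vfio-pci
  else
      driver=uio_pci_generic                 # assumed non-IOMMU fallback
  fi
  echo "Looking for driver=${driver}"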
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:06.061 23:10:03 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:06.061 Looking for driver=vfio-pci 00:04:06.061 23:10:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.061 23:10:03 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:06.061 23:10:03 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.061 23:10:03 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:07.437 23:10:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.437 23:10:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.437 23:10:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver [... the same three-statement marker/driver check repeats verbatim for each remaining entry in the config output ...] 00:04:08.372 23:10:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:08.372 23:10:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:08.372 23:10:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.372 23:10:05 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:08.372 23:10:05 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:08.372 23:10:05 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:08.372 23:10:05 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:10.921 00:04:10.921 real 0m4.901s user 0m1.148s sys 0m1.865s 23:10:08 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.921 23:10:08 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:10.921 ************************************ 00:04:10.921 END TEST guess_driver 00:04:10.921 ************************************ 00:04:10.921 00:04:10.921 real 0m7.412s user 0m1.717s sys 0m2.825s 00:04:10.921 23:10:08 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.921
23:10:08 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:10.921 ************************************ 00:04:10.921 END TEST driver 00:04:10.921 ************************************ 00:04:10.921 23:10:08 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:10.921 23:10:08 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:10.921 23:10:08 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.921 23:10:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:10.921 ************************************ 00:04:10.921 START TEST devices 00:04:10.921 ************************************ 00:04:10.921 23:10:08 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:10.921 * Looking for test storage... 00:04:10.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:10.921 23:10:08 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:10.921 23:10:08 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:10.921 23:10:08 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.921 23:10:08 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:12.824 23:10:10 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:12.825 23:10:10 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:12.825 23:10:10 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:12.825 23:10:10 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:12.825 23:10:10 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:12.825 23:10:10 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:12.825 23:10:10 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:12.825 23:10:10 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:12.825 23:10:10 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:12.825 23:10:10 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:12.825 23:10:10 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:12.825 23:10:10 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:12.825 23:10:10 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:12.825 23:10:10 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:12.825 23:10:10 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:12.825 23:10:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:12.825 23:10:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:12.825 23:10:10 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:04:12.825 23:10:10 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:12.825 23:10:10 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:12.825 23:10:10 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:12.825 23:10:10 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:12.825 No valid GPT data, 
bailing 00:04:12.825 23:10:10 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:12.825 23:10:10 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:12.825 23:10:10 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:12.825 23:10:10 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:12.825 23:10:10 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:12.825 23:10:10 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:12.825 23:10:10 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:12.825 23:10:10 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:12.825 23:10:10 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:12.825 23:10:10 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:04:12.825 23:10:10 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:12.825 23:10:10 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:12.825 23:10:10 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:12.825 23:10:10 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:12.825 23:10:10 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:12.825 23:10:10 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:12.825 ************************************ 00:04:12.825 START TEST nvme_mount 00:04:12.825 ************************************ 00:04:12.825 23:10:10 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:12.825 23:10:10 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:12.825 23:10:10 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:12.825 23:10:10 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.825 23:10:10 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:12.825 23:10:10 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:12.825 23:10:10 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:12.825 23:10:10 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:12.825 23:10:10 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:12.825 23:10:10 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:12.825 23:10:10 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:12.825 23:10:10 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:12.825 23:10:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:12.825 23:10:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:12.825 23:10:10 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:12.825 23:10:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:12.825 23:10:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:12.825 23:10:10 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:12.825 23:10:10 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:12.825 23:10:10 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:13.763 Creating new GPT entries in memory. 00:04:13.763 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:13.763 other utilities. 00:04:13.763 23:10:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:13.763 23:10:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:13.763 23:10:11 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:13.763 23:10:11 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:13.763 23:10:11 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:14.699 Creating new GPT entries in memory. 00:04:14.699 The operation has completed successfully. 00:04:14.699 23:10:12 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:14.699 23:10:12 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:14.699 23:10:12 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1240355 00:04:14.699 23:10:12 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.699 23:10:12 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:14.699 23:10:12 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.699 23:10:12 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:14.699 23:10:12 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:14.700 23:10:12 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.700 23:10:12 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:14.700 23:10:12 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:14.700 23:10:12 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:14.700 23:10:12 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.700 23:10:12 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:14.700 23:10:12 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:14.700 23:10:12 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:14.700 23:10:12 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:14.700 23:10:12 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
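A note on the xtrace pattern above: both guess_driver and the verify helper pipe the output of scripts/setup.sh through a read -r loop, using throwaway _ fields to skip columns and keeping only the ones under test (the arrow marker and target driver, or the PCI address and status), with PCI_ALLOWED=0000:88:00.0 restricting setup.sh to the one NVMe device. A minimal standalone sketch of that pattern, not the test's code verbatim; the field layout is inferred from rebind lines like "0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci" seen later in this log:

  #!/usr/bin/env bash
  # Keep only fields 5 and 6 of each line: the "->" marker and the new driver.
  fail=0
  while read -r _ _ _ _ marker setup_driver; do
      [[ $marker == '->' ]] || continue          # skip lines that are not rebind reports
      [[ $setup_driver == vfio-pci ]] || fail=1  # every device must land on vfio-pci
  done < <(./scripts/setup.sh config)
  (( fail == 0 ))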
00:04:14.700 23:10:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.700 23:10:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:14.700 23:10:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:14.700 23:10:12 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.700 23:10:12 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:15.634 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.634 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:15.634 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:15.634 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.634 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.634 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.634 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.634 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.634 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.634 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.634 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.634 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.634 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.634 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.634 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.634 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.635 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.635 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.635 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.635 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.635 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.635 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.635 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.635 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.635 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.635 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:15.635 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.635 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.635 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.635 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.635 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.635 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.635 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.635 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.635 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:15.635 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.893 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:15.893 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:15.893 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.893 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:15.893 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:15.893 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:15.893 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.893 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.893 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:15.893 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:15.893 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:15.893 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:15.893 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:16.151 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:16.151 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:16.151 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:16.151 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:16.151 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:16.151 23:10:13 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:16.151 23:10:13 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.151 23:10:13 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:16.151 23:10:13 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:16.151 23:10:13 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.151 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:16.151 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:16.151 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:16.152 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.152 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:16.152 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:16.152 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:16.152 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:16.152 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:16.152 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.152 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:16.152 23:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:16.152 23:10:13 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.152 23:10:13 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.528 23:10:14 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:17.528 23:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.528 23:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:17.528 23:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:17.528 23:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.528 23:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:17.528 23:10:15 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:17.528 23:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.528 23:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:04:17.528 23:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:17.528 23:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:17.528 23:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:17.528 23:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:17.528 23:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:17.528 23:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:17.528 23:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:17.528 23:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.528 23:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:17.528 23:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:17.528 23:10:15 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.528 23:10:15 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:18.464 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.724 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:18.724 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:18.724 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:18.724 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:18.724 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.724 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:18.724 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:18.724 23:10:16 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:18.724 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:18.724 00:04:18.724 real 0m6.116s 00:04:18.724 user 0m1.427s 00:04:18.724 sys 0m2.238s 00:04:18.724 23:10:16 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:18.724 23:10:16 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:18.724 ************************************ 00:04:18.724 END TEST nvme_mount 00:04:18.724 ************************************ 
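The teardown that closes nvme_mount above follows a fixed order: unmount first, then wipe the partition's filesystem signature, then the whole-disk GPT and protective-MBR signatures, which is exactly what the two wipefs reports show (ext4 magic 53 ef on the partition; "45 46 49 20 50 41 52 54" is the ASCII "EFI PART" GPT header). A condensed sketch of that cleanup sequence, with device names as in this run:

  # cleanup_nvme, condensed from the trace above
  nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
  mountpoint -q "$nvme_mount" && umount "$nvme_mount"     # drop the ext4 mount first
  [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1  # erase the ext4 magic (53 ef)
  [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1      # erase primary/backup GPT and the protective MBR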
00:04:18.724 23:10:16 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:18.724 23:10:16 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:18.724 23:10:16 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:18.724 23:10:16 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:18.724 ************************************ 00:04:18.724 START TEST dm_mount 00:04:18.724 ************************************ 00:04:18.724 23:10:16 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:18.724 23:10:16 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:18.724 23:10:16 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:18.724 23:10:16 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:18.724 23:10:16 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:18.724 23:10:16 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:18.724 23:10:16 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:18.724 23:10:16 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:18.724 23:10:16 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:18.724 23:10:16 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:18.724 23:10:16 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:18.724 23:10:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:18.724 23:10:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:18.724 23:10:16 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:18.724 23:10:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:18.724 23:10:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:18.724 23:10:16 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:18.724 23:10:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:18.724 23:10:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:18.724 23:10:16 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:18.724 23:10:16 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:18.724 23:10:16 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:19.661 Creating new GPT entries in memory. 00:04:19.661 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:19.661 other utilities. 00:04:19.661 23:10:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:19.661 23:10:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:19.661 23:10:17 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:19.661 23:10:17 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:19.661 23:10:17 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:20.596 Creating new GPT entries in memory. 00:04:20.596 The operation has completed successfully. 
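The dm_mount test carves two equal 1 GiB partitions out of nvme0n1 (the first, sectors 2048-2099199, just completed; the second, sectors 2099200-4196351, follows) and then stitches them into a single device-mapper target. A sketch of the overall shape, assuming a linear concatenation table; the trace shows the dmsetup create call but not the table fed to it, so the table below is hypothetical:

  sgdisk /dev/nvme0n1 --zap-all                          # destroy any existing GPT/MBR
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199     # p1: 2097152 sectors (1 GiB)
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351  # p2: the next 2097152 sectors
  # Hypothetical linear table: "<start> <size> linear <device> <offset>",
  # sizes in 512-byte sectors, concatenating the partitions into one 2 GiB target.
  printf '%s\n' \
      '0 2097152 linear /dev/nvme0n1p1 0' \
      '2097152 2097152 linear /dev/nvme0n1p2 0' \
      | dmsetup create nvme_dm_test
  readlink -f /dev/mapper/nvme_dm_test                   # resolves to /dev/dm-0 in this run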
00:04:20.596 23:10:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:20.596 23:10:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:20.596 23:10:18 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:20.596 23:10:18 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:20.596 23:10:18 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:21.974 The operation has completed successfully. 00:04:21.974 23:10:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:21.974 23:10:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:21.974 23:10:19 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1242742 00:04:21.974 23:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:21.974 23:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:21.974 23:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:21.974 23:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:21.974 23:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:21.974 23:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:21.974 23:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:21.974 23:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:21.974 23:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:21.974 23:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:21.974 23:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:21.974 23:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:21.974 23:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:21.974 23:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:21.974 23:10:19 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:21.974 23:10:19 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:21.974 23:10:19 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:21.974 23:10:19 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:21.975 23:10:19 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:21.975 23:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:21.975 23:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:21.975 23:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:21.975 23:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:21.975 23:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:21.975 23:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:21.975 23:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:21.975 23:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:21.975 23:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:21.975 23:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.975 23:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:21.975 23:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:21.975 23:10:19 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.975 23:10:19 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:22.910 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:22.911 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:22.911 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:22.911 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:22.911 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:22.911 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:22.911 23:10:20 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:22.911 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:22.911 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:22.911 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:22.911 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.911 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:22.911 23:10:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:22.911 23:10:20 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.911 23:10:20 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:24.284 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.284 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:24.284 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:24.284 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.284 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.284 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.284 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.284 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.284 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.284 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.284 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.284 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.284 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.284 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:24.285 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:24.285 00:04:24.285 real 0m5.518s 00:04:24.285 user 0m0.922s 00:04:24.285 sys 0m1.462s 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.285 23:10:21 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:24.285 ************************************ 00:04:24.285 END TEST dm_mount 00:04:24.285 ************************************ 00:04:24.285 23:10:21 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:24.285 23:10:21 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:24.285 23:10:21 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.285 23:10:21 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.285 23:10:21 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:24.285 23:10:21 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:24.285 23:10:21 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:24.544 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:24.544 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:24.544 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:24.544 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:24.544 23:10:22 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:24.544 23:10:22 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:24.544 23:10:22 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:24.544 23:10:22 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.544 23:10:22 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:24.544 23:10:22 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:24.544 23:10:22 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:24.544 00:04:24.544 real 0m13.579s 00:04:24.544 user 0m2.990s 00:04:24.544 sys 0m4.764s 00:04:24.544 23:10:22 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.544 23:10:22 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:24.544 ************************************ 00:04:24.544 END TEST devices 00:04:24.544 ************************************ 00:04:24.544 00:04:24.544 real 0m42.787s 00:04:24.544 user 0m12.398s 00:04:24.544 sys 0m18.538s 00:04:24.544 23:10:22 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.544 23:10:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:24.544 ************************************ 00:04:24.544 END TEST setup.sh 00:04:24.544 ************************************ 00:04:24.544 23:10:22 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:25.918 Hugepages 00:04:25.918 node hugesize free / total 00:04:25.918 node0 1048576kB 0 / 0 00:04:25.918 node0 2048kB 2048 / 2048 00:04:25.918 node1 1048576kB 0 / 0 00:04:25.918 node1 2048kB 0 / 0 00:04:25.918 00:04:25.918 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:25.918 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:25.918 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:25.918 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:25.918 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:25.918 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:25.918 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:25.918 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:25.918 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:25.918 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:25.918 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:25.918 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:25.918 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:25.918 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:25.918 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:25.918 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:25.918 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:25.918 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:25.918 23:10:23 -- spdk/autotest.sh@130 -- # uname -s 00:04:25.918 23:10:23 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:25.918 23:10:23 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:25.918 23:10:23 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:26.852 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:26.852 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:26.852 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:26.852 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:26.852 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:26.852 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:26.852 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:26.852 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:26.852 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:26.852 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:27.110 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:27.110 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:27.110 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:27.110 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:27.110 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:27.110 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:28.047 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:28.047 23:10:25 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:28.982 23:10:26 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:28.982 23:10:26 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:28.982 23:10:26 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:28.982 23:10:26 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:28.982 23:10:26 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:28.982 23:10:26 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:28.982 23:10:26 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:28.982 23:10:26 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:28.982 23:10:26 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:29.240 23:10:26 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:29.240 23:10:26 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:29.240 23:10:26 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:30.175 Waiting for block devices as requested 00:04:30.175 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:04:30.434 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:30.434 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:30.434 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:30.434 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:30.692 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:30.692 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:30.692 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:30.692 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:30.975 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:30.975 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:30.975 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:30.975 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:31.236 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:31.236 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:31.236 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:31.236 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:31.495 23:10:29 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 
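Note: the trace above shows how the harness builds its NVMe device list. A minimal sketch of the same enumeration, assuming an SPDK checkout at the workspace path used in this run:

    # Enumerate NVMe PCI addresses via gen_nvme.sh, exactly as traced above.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    printf '%s\n' "${bdfs[@]}"   # prints 0000:88:00.0 on this machine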
00:04:31.495 23:10:29 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:31.495 23:10:29 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:31.495 23:10:29 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:04:31.495 23:10:29 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:31.495 23:10:29 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:31.495 23:10:29 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:31.495 23:10:29 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:31.495 23:10:29 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:31.495 23:10:29 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:31.495 23:10:29 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:31.495 23:10:29 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:31.495 23:10:29 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:31.495 23:10:29 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:04:31.495 23:10:29 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:31.495 23:10:29 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:31.495 23:10:29 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:31.495 23:10:29 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:31.495 23:10:29 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:31.495 23:10:29 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:31.495 23:10:29 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:31.495 23:10:29 -- common/autotest_common.sh@1557 -- # continue 00:04:31.495 23:10:29 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:31.495 23:10:29 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:31.495 23:10:29 -- common/autotest_common.sh@10 -- # set +x 00:04:31.495 23:10:29 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:31.495 23:10:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:31.495 23:10:29 -- common/autotest_common.sh@10 -- # set +x 00:04:31.495 23:10:29 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:32.872 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:32.872 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:32.872 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:32.872 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:32.872 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:32.872 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:32.872 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:32.872 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:32.872 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:32.872 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:32.872 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:32.872 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:32.872 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:32.872 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:32.872 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:32.872 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:33.808 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:33.808 23:10:31 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:33.808 23:10:31 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:33.808 23:10:31 -- 
common/autotest_common.sh@10 -- # set +x 00:04:33.808 23:10:31 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:33.808 23:10:31 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:33.808 23:10:31 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:33.808 23:10:31 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:33.808 23:10:31 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:33.808 23:10:31 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:33.808 23:10:31 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:33.808 23:10:31 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:33.808 23:10:31 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:33.808 23:10:31 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:33.808 23:10:31 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:33.808 23:10:31 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:33.808 23:10:31 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:33.808 23:10:31 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:33.808 23:10:31 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:33.808 23:10:31 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:33.808 23:10:31 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:33.808 23:10:31 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:33.808 23:10:31 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:04:33.808 23:10:31 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:04:33.808 23:10:31 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1247916 00:04:33.808 23:10:31 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:33.808 23:10:31 -- common/autotest_common.sh@1598 -- # waitforlisten 1247916 00:04:33.808 23:10:31 -- common/autotest_common.sh@831 -- # '[' -z 1247916 ']' 00:04:33.808 23:10:31 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.808 23:10:31 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:33.808 23:10:31 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.808 23:10:31 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:33.808 23:10:31 -- common/autotest_common.sh@10 -- # set +x 00:04:34.067 [2024-07-25 23:10:31.581110] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:04:34.067 [2024-07-25 23:10:31.581189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1247916 ] 00:04:34.067 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.067 [2024-07-25 23:10:31.613885] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
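Note: before attempting the OPAL revert, the trace above filters the BDF list down to controllers whose PCI device id is 0x0a54. A hedged sketch of that sysfs check (variable names here are illustrative, not the harness's own):

    # Keep only BDFs whose PCI device id matches the one this run targets.
    want=0x0a54
    for bdf in "${bdfs[@]}"; do
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$want" ]] && opal_bdfs+=("$bdf")
    done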
00:04:34.067 [2024-07-25 23:10:31.646138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.067 [2024-07-25 23:10:31.735743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.325 23:10:31 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:34.325 23:10:31 -- common/autotest_common.sh@864 -- # return 0 00:04:34.325 23:10:31 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:34.325 23:10:31 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:34.325 23:10:31 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:37.607 nvme0n1 00:04:37.607 23:10:35 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:37.607 [2024-07-25 23:10:35.306940] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:37.607 [2024-07-25 23:10:35.306991] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:37.607 request: 00:04:37.607 { 00:04:37.607 "nvme_ctrlr_name": "nvme0", 00:04:37.607 "password": "test", 00:04:37.607 "method": "bdev_nvme_opal_revert", 00:04:37.607 "req_id": 1 00:04:37.607 } 00:04:37.607 Got JSON-RPC error response 00:04:37.607 response: 00:04:37.607 { 00:04:37.607 "code": -32603, 00:04:37.607 "message": "Internal error" 00:04:37.607 } 00:04:37.607 23:10:35 -- common/autotest_common.sh@1604 -- # true 00:04:37.607 23:10:35 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:37.607 23:10:35 -- common/autotest_common.sh@1608 -- # killprocess 1247916 00:04:37.607 23:10:35 -- common/autotest_common.sh@950 -- # '[' -z 1247916 ']' 00:04:37.607 23:10:35 -- common/autotest_common.sh@954 -- # kill -0 1247916 00:04:37.607 23:10:35 -- common/autotest_common.sh@955 -- # uname 00:04:37.607 23:10:35 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:37.607 23:10:35 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1247916 00:04:37.866 23:10:35 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:37.866 23:10:35 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:37.866 23:10:35 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1247916' 00:04:37.866 killing process with pid 1247916 00:04:37.866 23:10:35 -- common/autotest_common.sh@969 -- # kill 1247916 00:04:37.866 23:10:35 -- common/autotest_common.sh@974 -- # wait 1247916 00:04:39.765 23:10:37 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:39.765 23:10:37 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:39.765 23:10:37 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:39.765 23:10:37 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:39.765 23:10:37 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:39.765 23:10:37 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:39.765 23:10:37 -- common/autotest_common.sh@10 -- # set +x 00:04:39.765 23:10:37 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:39.765 23:10:37 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:39.765 23:10:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.765 23:10:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.765 23:10:37 -- common/autotest_common.sh@10 -- # set +x 00:04:39.765 ************************************ 00:04:39.765 START TEST env 
00:04:39.765 ************************************ 00:04:39.765 23:10:37 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:39.765 * Looking for test storage... 00:04:39.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:39.765 23:10:37 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:39.765 23:10:37 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.765 23:10:37 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.765 23:10:37 env -- common/autotest_common.sh@10 -- # set +x 00:04:39.765 ************************************ 00:04:39.765 START TEST env_memory 00:04:39.765 ************************************ 00:04:39.765 23:10:37 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:39.765 00:04:39.765 00:04:39.765 CUnit - A unit testing framework for C - Version 2.1-3 00:04:39.765 http://cunit.sourceforge.net/ 00:04:39.765 00:04:39.765 00:04:39.765 Suite: memory 00:04:39.765 Test: alloc and free memory map ...[2024-07-25 23:10:37.287283] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:39.765 passed 00:04:39.765 Test: mem map translation ...[2024-07-25 23:10:37.311915] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:39.765 [2024-07-25 23:10:37.311941] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:39.765 [2024-07-25 23:10:37.311993] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:39.765 [2024-07-25 23:10:37.312008] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:39.765 passed 00:04:39.765 Test: mem map registration ...[2024-07-25 23:10:37.364189] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:39.765 [2024-07-25 23:10:37.364213] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:39.765 passed 00:04:39.765 Test: mem map adjacent registrations ...passed 00:04:39.765 00:04:39.765 Run Summary: Type Total Ran Passed Failed Inactive 00:04:39.765 suites 1 1 n/a 0 0 00:04:39.765 tests 4 4 4 0 0 00:04:39.765 asserts 152 152 152 0 n/a 00:04:39.765 00:04:39.765 Elapsed time = 0.174 seconds 00:04:39.765 00:04:39.765 real 0m0.182s 00:04:39.765 user 0m0.174s 00:04:39.765 sys 0m0.007s 00:04:39.765 23:10:37 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.765 23:10:37 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:39.765 ************************************ 00:04:39.765 END TEST env_memory 00:04:39.765 ************************************ 00:04:39.766 23:10:37 env -- env/env.sh@11 -- # run_test env_vtophys 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:39.766 23:10:37 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.766 23:10:37 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.766 23:10:37 env -- common/autotest_common.sh@10 -- # set +x 00:04:39.766 ************************************ 00:04:39.766 START TEST env_vtophys 00:04:39.766 ************************************ 00:04:39.766 23:10:37 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:40.025 EAL: lib.eal log level changed from notice to debug 00:04:40.025 EAL: Detected lcore 0 as core 0 on socket 0 00:04:40.025 EAL: Detected lcore 1 as core 1 on socket 0 00:04:40.025 EAL: Detected lcore 2 as core 2 on socket 0 00:04:40.025 EAL: Detected lcore 3 as core 3 on socket 0 00:04:40.025 EAL: Detected lcore 4 as core 4 on socket 0 00:04:40.025 EAL: Detected lcore 5 as core 5 on socket 0 00:04:40.025 EAL: Detected lcore 6 as core 8 on socket 0 00:04:40.025 EAL: Detected lcore 7 as core 9 on socket 0 00:04:40.025 EAL: Detected lcore 8 as core 10 on socket 0 00:04:40.025 EAL: Detected lcore 9 as core 11 on socket 0 00:04:40.025 EAL: Detected lcore 10 as core 12 on socket 0 00:04:40.025 EAL: Detected lcore 11 as core 13 on socket 0 00:04:40.025 EAL: Detected lcore 12 as core 0 on socket 1 00:04:40.025 EAL: Detected lcore 13 as core 1 on socket 1 00:04:40.025 EAL: Detected lcore 14 as core 2 on socket 1 00:04:40.025 EAL: Detected lcore 15 as core 3 on socket 1 00:04:40.025 EAL: Detected lcore 16 as core 4 on socket 1 00:04:40.025 EAL: Detected lcore 17 as core 5 on socket 1 00:04:40.025 EAL: Detected lcore 18 as core 8 on socket 1 00:04:40.025 EAL: Detected lcore 19 as core 9 on socket 1 00:04:40.025 EAL: Detected lcore 20 as core 10 on socket 1 00:04:40.025 EAL: Detected lcore 21 as core 11 on socket 1 00:04:40.025 EAL: Detected lcore 22 as core 12 on socket 1 00:04:40.025 EAL: Detected lcore 23 as core 13 on socket 1 00:04:40.025 EAL: Detected lcore 24 as core 0 on socket 0 00:04:40.025 EAL: Detected lcore 25 as core 1 on socket 0 00:04:40.025 EAL: Detected lcore 26 as core 2 on socket 0 00:04:40.025 EAL: Detected lcore 27 as core 3 on socket 0 00:04:40.025 EAL: Detected lcore 28 as core 4 on socket 0 00:04:40.025 EAL: Detected lcore 29 as core 5 on socket 0 00:04:40.025 EAL: Detected lcore 30 as core 8 on socket 0 00:04:40.025 EAL: Detected lcore 31 as core 9 on socket 0 00:04:40.025 EAL: Detected lcore 32 as core 10 on socket 0 00:04:40.025 EAL: Detected lcore 33 as core 11 on socket 0 00:04:40.025 EAL: Detected lcore 34 as core 12 on socket 0 00:04:40.025 EAL: Detected lcore 35 as core 13 on socket 0 00:04:40.025 EAL: Detected lcore 36 as core 0 on socket 1 00:04:40.025 EAL: Detected lcore 37 as core 1 on socket 1 00:04:40.025 EAL: Detected lcore 38 as core 2 on socket 1 00:04:40.025 EAL: Detected lcore 39 as core 3 on socket 1 00:04:40.025 EAL: Detected lcore 40 as core 4 on socket 1 00:04:40.025 EAL: Detected lcore 41 as core 5 on socket 1 00:04:40.025 EAL: Detected lcore 42 as core 8 on socket 1 00:04:40.025 EAL: Detected lcore 43 as core 9 on socket 1 00:04:40.025 EAL: Detected lcore 44 as core 10 on socket 1 00:04:40.025 EAL: Detected lcore 45 as core 11 on socket 1 00:04:40.025 EAL: Detected lcore 46 as core 12 on socket 1 00:04:40.025 EAL: Detected lcore 47 as core 13 on socket 1 00:04:40.025 EAL: Maximum logical cores by configuration: 128 00:04:40.025 EAL: Detected CPU lcores: 48 
00:04:40.025 EAL: Detected NUMA nodes: 2 00:04:40.025 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:04:40.025 EAL: Detected shared linkage of DPDK 00:04:40.025 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:04:40.025 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:04:40.025 EAL: Registered [vdev] bus. 00:04:40.025 EAL: bus.vdev log level changed from disabled to notice 00:04:40.025 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:04:40.025 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:04:40.025 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:40.025 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:40.025 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:04:40.025 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:04:40.025 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:04:40.025 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:04:40.025 EAL: No shared files mode enabled, IPC will be disabled 00:04:40.025 EAL: No shared files mode enabled, IPC is disabled 00:04:40.025 EAL: Bus pci wants IOVA as 'DC' 00:04:40.025 EAL: Bus vdev wants IOVA as 'DC' 00:04:40.025 EAL: Buses did not request a specific IOVA mode. 00:04:40.025 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:40.025 EAL: Selected IOVA mode 'VA' 00:04:40.025 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.025 EAL: Probing VFIO support... 00:04:40.025 EAL: IOMMU type 1 (Type 1) is supported 00:04:40.025 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:40.025 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:40.025 EAL: VFIO support initialized 00:04:40.025 EAL: Ask a virtual area of 0x2e000 bytes 00:04:40.025 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:40.025 EAL: Setting up physically contiguous memory... 
00:04:40.025 EAL: Setting maximum number of open files to 524288 00:04:40.025 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:40.025 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:40.025 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:40.025 EAL: Ask a virtual area of 0x61000 bytes 00:04:40.025 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:40.025 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:40.025 EAL: Ask a virtual area of 0x400000000 bytes 00:04:40.025 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:40.025 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:40.025 EAL: Ask a virtual area of 0x61000 bytes 00:04:40.025 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:40.025 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:40.025 EAL: Ask a virtual area of 0x400000000 bytes 00:04:40.025 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:40.025 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:40.025 EAL: Ask a virtual area of 0x61000 bytes 00:04:40.025 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:40.025 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:40.025 EAL: Ask a virtual area of 0x400000000 bytes 00:04:40.025 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:40.025 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:40.025 EAL: Ask a virtual area of 0x61000 bytes 00:04:40.025 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:40.025 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:40.025 EAL: Ask a virtual area of 0x400000000 bytes 00:04:40.025 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:40.025 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:40.025 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:40.025 EAL: Ask a virtual area of 0x61000 bytes 00:04:40.025 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:40.025 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:40.025 EAL: Ask a virtual area of 0x400000000 bytes 00:04:40.025 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:40.025 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:40.025 EAL: Ask a virtual area of 0x61000 bytes 00:04:40.025 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:40.025 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:40.025 EAL: Ask a virtual area of 0x400000000 bytes 00:04:40.025 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:40.025 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:40.025 EAL: Ask a virtual area of 0x61000 bytes 00:04:40.025 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:40.025 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:40.025 EAL: Ask a virtual area of 0x400000000 bytes 00:04:40.025 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:40.025 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:40.025 EAL: Ask a virtual area of 0x61000 bytes 00:04:40.025 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:40.025 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:40.025 EAL: Ask a virtual area of 0x400000000 bytes 00:04:40.025 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:40.025 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:40.025 EAL: Hugepages will be freed exactly as allocated. 00:04:40.025 EAL: No shared files mode enabled, IPC is disabled 00:04:40.025 EAL: No shared files mode enabled, IPC is disabled 00:04:40.025 EAL: TSC frequency is ~2700000 KHz 00:04:40.025 EAL: Main lcore 0 is ready (tid=7f5ef61f5a00;cpuset=[0]) 00:04:40.025 EAL: Trying to obtain current memory policy. 00:04:40.025 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.025 EAL: Restoring previous memory policy: 0 00:04:40.025 EAL: request: mp_malloc_sync 00:04:40.025 EAL: No shared files mode enabled, IPC is disabled 00:04:40.025 EAL: Heap on socket 0 was expanded by 2MB 00:04:40.025 EAL: No shared files mode enabled, IPC is disabled 00:04:40.025 EAL: No shared files mode enabled, IPC is disabled 00:04:40.025 EAL: Mem event callback 'spdk:(nil)' registered 00:04:40.025 00:04:40.025 00:04:40.025 CUnit - A unit testing framework for C - Version 2.1-3 00:04:40.025 http://cunit.sourceforge.net/ 00:04:40.025 00:04:40.025 00:04:40.025 Suite: components_suite 00:04:40.025 Test: vtophys_malloc_test ...passed 00:04:40.025 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:40.025 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.026 EAL: Restoring previous memory policy: 4 00:04:40.026 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.026 EAL: request: mp_malloc_sync 00:04:40.026 EAL: No shared files mode enabled, IPC is disabled 00:04:40.026 EAL: Heap on socket 0 was expanded by 4MB 00:04:40.026 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.026 EAL: request: mp_malloc_sync 00:04:40.026 EAL: No shared files mode enabled, IPC is disabled 00:04:40.026 EAL: Heap on socket 0 was shrunk by 4MB 00:04:40.026 EAL: Trying to obtain current memory policy. 00:04:40.026 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.026 EAL: Restoring previous memory policy: 4 00:04:40.026 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.026 EAL: request: mp_malloc_sync 00:04:40.026 EAL: No shared files mode enabled, IPC is disabled 00:04:40.026 EAL: Heap on socket 0 was expanded by 6MB 00:04:40.026 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.026 EAL: request: mp_malloc_sync 00:04:40.026 EAL: No shared files mode enabled, IPC is disabled 00:04:40.026 EAL: Heap on socket 0 was shrunk by 6MB 00:04:40.026 EAL: Trying to obtain current memory policy. 00:04:40.026 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.026 EAL: Restoring previous memory policy: 4 00:04:40.026 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.026 EAL: request: mp_malloc_sync 00:04:40.026 EAL: No shared files mode enabled, IPC is disabled 00:04:40.026 EAL: Heap on socket 0 was expanded by 10MB 00:04:40.026 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.026 EAL: request: mp_malloc_sync 00:04:40.026 EAL: No shared files mode enabled, IPC is disabled 00:04:40.026 EAL: Heap on socket 0 was shrunk by 10MB 00:04:40.026 EAL: Trying to obtain current memory policy. 
00:04:40.026 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.026 EAL: Restoring previous memory policy: 4 00:04:40.026 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.026 EAL: request: mp_malloc_sync 00:04:40.026 EAL: No shared files mode enabled, IPC is disabled 00:04:40.026 EAL: Heap on socket 0 was expanded by 18MB 00:04:40.026 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.026 EAL: request: mp_malloc_sync 00:04:40.026 EAL: No shared files mode enabled, IPC is disabled 00:04:40.026 EAL: Heap on socket 0 was shrunk by 18MB 00:04:40.026 EAL: Trying to obtain current memory policy. 00:04:40.026 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.026 EAL: Restoring previous memory policy: 4 00:04:40.026 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.026 EAL: request: mp_malloc_sync 00:04:40.026 EAL: No shared files mode enabled, IPC is disabled 00:04:40.026 EAL: Heap on socket 0 was expanded by 34MB 00:04:40.026 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.026 EAL: request: mp_malloc_sync 00:04:40.026 EAL: No shared files mode enabled, IPC is disabled 00:04:40.026 EAL: Heap on socket 0 was shrunk by 34MB 00:04:40.026 EAL: Trying to obtain current memory policy. 00:04:40.026 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.026 EAL: Restoring previous memory policy: 4 00:04:40.026 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.026 EAL: request: mp_malloc_sync 00:04:40.026 EAL: No shared files mode enabled, IPC is disabled 00:04:40.026 EAL: Heap on socket 0 was expanded by 66MB 00:04:40.026 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.026 EAL: request: mp_malloc_sync 00:04:40.026 EAL: No shared files mode enabled, IPC is disabled 00:04:40.026 EAL: Heap on socket 0 was shrunk by 66MB 00:04:40.026 EAL: Trying to obtain current memory policy. 00:04:40.026 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.026 EAL: Restoring previous memory policy: 4 00:04:40.026 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.026 EAL: request: mp_malloc_sync 00:04:40.026 EAL: No shared files mode enabled, IPC is disabled 00:04:40.026 EAL: Heap on socket 0 was expanded by 130MB 00:04:40.026 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.026 EAL: request: mp_malloc_sync 00:04:40.026 EAL: No shared files mode enabled, IPC is disabled 00:04:40.026 EAL: Heap on socket 0 was shrunk by 130MB 00:04:40.026 EAL: Trying to obtain current memory policy. 00:04:40.026 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.284 EAL: Restoring previous memory policy: 4 00:04:40.284 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.284 EAL: request: mp_malloc_sync 00:04:40.284 EAL: No shared files mode enabled, IPC is disabled 00:04:40.284 EAL: Heap on socket 0 was expanded by 258MB 00:04:40.284 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.284 EAL: request: mp_malloc_sync 00:04:40.284 EAL: No shared files mode enabled, IPC is disabled 00:04:40.284 EAL: Heap on socket 0 was shrunk by 258MB 00:04:40.285 EAL: Trying to obtain current memory policy. 
00:04:40.285 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.542 EAL: Restoring previous memory policy: 4 00:04:40.542 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.542 EAL: request: mp_malloc_sync 00:04:40.542 EAL: No shared files mode enabled, IPC is disabled 00:04:40.542 EAL: Heap on socket 0 was expanded by 514MB 00:04:40.542 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.542 EAL: request: mp_malloc_sync 00:04:40.542 EAL: No shared files mode enabled, IPC is disabled 00:04:40.542 EAL: Heap on socket 0 was shrunk by 514MB 00:04:40.542 EAL: Trying to obtain current memory policy. 00:04:40.542 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.800 EAL: Restoring previous memory policy: 4 00:04:41.058 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.058 EAL: request: mp_malloc_sync 00:04:41.058 EAL: No shared files mode enabled, IPC is disabled 00:04:41.058 EAL: Heap on socket 0 was expanded by 1026MB 00:04:41.058 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.316 EAL: request: mp_malloc_sync 00:04:41.316 EAL: No shared files mode enabled, IPC is disabled 00:04:41.316 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:41.316 passed 00:04:41.316 00:04:41.316 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.316 suites 1 1 n/a 0 0 00:04:41.316 tests 2 2 2 0 0 00:04:41.316 asserts 497 497 497 0 n/a 00:04:41.316 00:04:41.316 Elapsed time = 1.379 seconds 00:04:41.316 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.316 EAL: request: mp_malloc_sync 00:04:41.316 EAL: No shared files mode enabled, IPC is disabled 00:04:41.316 EAL: Heap on socket 0 was shrunk by 2MB 00:04:41.316 EAL: No shared files mode enabled, IPC is disabled 00:04:41.316 EAL: No shared files mode enabled, IPC is disabled 00:04:41.316 EAL: No shared files mode enabled, IPC is disabled 00:04:41.316 00:04:41.316 real 0m1.493s 00:04:41.316 user 0m0.860s 00:04:41.316 sys 0m0.601s 00:04:41.316 23:10:38 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.316 23:10:38 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:41.316 ************************************ 00:04:41.316 END TEST env_vtophys 00:04:41.316 ************************************ 00:04:41.316 23:10:38 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:41.316 23:10:38 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.316 23:10:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.316 23:10:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.316 ************************************ 00:04:41.316 START TEST env_pci 00:04:41.316 ************************************ 00:04:41.316 23:10:39 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:41.316 00:04:41.316 00:04:41.316 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.316 http://cunit.sourceforge.net/ 00:04:41.316 00:04:41.316 00:04:41.316 Suite: pci 00:04:41.316 Test: pci_hook ...[2024-07-25 23:10:39.029172] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1248802 has claimed it 00:04:41.574 EAL: Cannot find device (10000:00:01.0) 00:04:41.574 EAL: Failed to attach device on primary process 00:04:41.574 passed 00:04:41.574 00:04:41.574 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:41.574 suites 1 1 n/a 0 0 00:04:41.574 tests 1 1 1 0 0 00:04:41.574 asserts 25 25 25 0 n/a 00:04:41.574 00:04:41.574 Elapsed time = 0.021 seconds 00:04:41.574 00:04:41.574 real 0m0.033s 00:04:41.574 user 0m0.010s 00:04:41.574 sys 0m0.023s 00:04:41.574 23:10:39 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.574 23:10:39 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:41.574 ************************************ 00:04:41.574 END TEST env_pci 00:04:41.574 ************************************ 00:04:41.574 23:10:39 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:41.574 23:10:39 env -- env/env.sh@15 -- # uname 00:04:41.575 23:10:39 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:41.575 23:10:39 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:41.575 23:10:39 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:41.575 23:10:39 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:41.575 23:10:39 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.575 23:10:39 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.575 ************************************ 00:04:41.575 START TEST env_dpdk_post_init 00:04:41.575 ************************************ 00:04:41.575 23:10:39 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:41.575 EAL: Detected CPU lcores: 48 00:04:41.575 EAL: Detected NUMA nodes: 2 00:04:41.575 EAL: Detected shared linkage of DPDK 00:04:41.575 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:41.575 EAL: Selected IOVA mode 'VA' 00:04:41.575 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.575 EAL: VFIO support initialized 00:04:41.575 EAL: Using IOMMU type 1 (Type 1) 00:04:45.757 Starting DPDK initialization... 00:04:45.757 Starting SPDK post initialization... 00:04:45.757 SPDK NVMe probe 00:04:45.757 Attaching to 0000:88:00.0 00:04:45.757 Attached to 0000:88:00.0 00:04:45.757 Cleaning up... 
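Note: the probe above comes from running the test binary with the same EAL arguments the harness traced earlier (core mask 0x1 and a fixed base virtual address). A sketch of the standalone invocation, reusing $rootdir from the note above:

    # Re-run the DPDK post-init probe by hand; expects hugepages configured.
    "$rootdir/test/env/env_dpdk_post_init/env_dpdk_post_init" \
        -c 0x1 --base-virtaddr=0x200000000000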
00:04:45.757 00:04:45.757 real 0m4.381s 00:04:45.757 user 0m3.263s 00:04:45.757 sys 0m0.182s 00:04:45.757 23:10:43 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.757 23:10:43 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:45.757 ************************************ 00:04:45.757 END TEST env_dpdk_post_init 00:04:45.757 ************************************ 00:04:46.015 23:10:43 env -- env/env.sh@26 -- # uname 00:04:46.015 23:10:43 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:46.015 23:10:43 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:46.015 23:10:43 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.015 23:10:43 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.015 23:10:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.015 ************************************ 00:04:46.015 START TEST env_mem_callbacks 00:04:46.016 ************************************ 00:04:46.016 23:10:43 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:46.016 EAL: Detected CPU lcores: 48 00:04:46.016 EAL: Detected NUMA nodes: 2 00:04:46.016 EAL: Detected shared linkage of DPDK 00:04:46.016 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:46.016 EAL: Selected IOVA mode 'VA' 00:04:46.016 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.016 EAL: VFIO support initialized 00:04:46.016 00:04:46.016 00:04:46.016 CUnit - A unit testing framework for C - Version 2.1-3 00:04:46.016 http://cunit.sourceforge.net/ 00:04:46.016 00:04:46.016 00:04:46.016 Suite: memory 00:04:46.016 Test: test ... 
00:04:46.016 register 0x200000200000 2097152 00:04:46.016 malloc 3145728 00:04:46.016 register 0x200000400000 4194304 00:04:46.016 buf 0x200000500000 len 3145728 PASSED 00:04:46.016 malloc 64 00:04:46.016 buf 0x2000004fff40 len 64 PASSED 00:04:46.016 malloc 4194304 00:04:46.016 register 0x200000800000 6291456 00:04:46.016 buf 0x200000a00000 len 4194304 PASSED 00:04:46.016 free 0x200000500000 3145728 00:04:46.016 free 0x2000004fff40 64 00:04:46.016 unregister 0x200000400000 4194304 PASSED 00:04:46.016 free 0x200000a00000 4194304 00:04:46.016 unregister 0x200000800000 6291456 PASSED 00:04:46.016 malloc 8388608 00:04:46.016 register 0x200000400000 10485760 00:04:46.016 buf 0x200000600000 len 8388608 PASSED 00:04:46.016 free 0x200000600000 8388608 00:04:46.016 unregister 0x200000400000 10485760 PASSED 00:04:46.016 passed 00:04:46.016 00:04:46.016 Run Summary: Type Total Ran Passed Failed Inactive 00:04:46.016 suites 1 1 n/a 0 0 00:04:46.016 tests 1 1 1 0 0 00:04:46.016 asserts 15 15 15 0 n/a 00:04:46.016 00:04:46.016 Elapsed time = 0.005 seconds 00:04:46.016 00:04:46.016 real 0m0.050s 00:04:46.016 user 0m0.017s 00:04:46.016 sys 0m0.033s 00:04:46.016 23:10:43 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.016 23:10:43 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:46.016 ************************************ 00:04:46.016 END TEST env_mem_callbacks 00:04:46.016 ************************************ 00:04:46.016 00:04:46.016 real 0m6.427s 00:04:46.016 user 0m4.444s 00:04:46.016 sys 0m1.031s 00:04:46.016 23:10:43 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.016 23:10:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.016 ************************************ 00:04:46.016 END TEST env 00:04:46.016 ************************************ 00:04:46.016 23:10:43 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:46.016 23:10:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.016 23:10:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.016 23:10:43 -- common/autotest_common.sh@10 -- # set +x 00:04:46.016 ************************************ 00:04:46.016 START TEST rpc 00:04:46.016 ************************************ 00:04:46.016 23:10:43 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:46.016 * Looking for test storage... 00:04:46.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:46.016 23:10:43 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1249486 00:04:46.016 23:10:43 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:46.016 23:10:43 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.016 23:10:43 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1249486 00:04:46.016 23:10:43 rpc -- common/autotest_common.sh@831 -- # '[' -z 1249486 ']' 00:04:46.016 23:10:43 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.016 23:10:43 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:46.016 23:10:43 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
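Note: the rpc suite launches spdk_tgt with bdev tracepoints enabled and then blocks until the RPC socket answers. A rough equivalent of that launch-and-wait pattern (polling with rpc_get_methods is an assumption; the harness's waitforlisten helper is more elaborate):

    # Start the target with bdev tracepoints, then poll the RPC socket.
    "$rootdir/build/bin/spdk_tgt" -e bdev &
    spdk_pid=$!
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done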
00:04:46.016 23:10:43 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:46.016 23:10:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.274 [2024-07-25 23:10:43.748174] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:04:46.274 [2024-07-25 23:10:43.748274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1249486 ] 00:04:46.274 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.274 [2024-07-25 23:10:43.781016] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:46.274 [2024-07-25 23:10:43.812031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.274 [2024-07-25 23:10:43.903237] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:46.274 [2024-07-25 23:10:43.903301] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1249486' to capture a snapshot of events at runtime. 00:04:46.274 [2024-07-25 23:10:43.903317] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:46.274 [2024-07-25 23:10:43.903330] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:46.274 [2024-07-25 23:10:43.903352] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1249486 for offline analysis/debug. 00:04:46.274 [2024-07-25 23:10:43.903382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.532 23:10:44 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:46.532 23:10:44 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:46.532 23:10:44 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:46.532 23:10:44 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:46.532 23:10:44 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:46.532 23:10:44 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:46.532 23:10:44 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.532 23:10:44 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.532 23:10:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.532 ************************************ 00:04:46.532 START TEST rpc_integrity 00:04:46.532 ************************************ 00:04:46.532 23:10:44 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:46.532 23:10:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:46.532 23:10:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.532 23:10:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.532 23:10:44 rpc.rpc_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.532 23:10:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:46.532 23:10:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:46.532 23:10:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:46.532 23:10:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:46.532 23:10:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.532 23:10:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.532 23:10:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.532 23:10:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:46.532 23:10:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:46.532 23:10:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.532 23:10:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.532 23:10:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.532 23:10:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:46.532 { 00:04:46.532 "name": "Malloc0", 00:04:46.532 "aliases": [ 00:04:46.532 "0be42be8-8213-46a4-9cb6-218f9b979387" 00:04:46.532 ], 00:04:46.532 "product_name": "Malloc disk", 00:04:46.532 "block_size": 512, 00:04:46.532 "num_blocks": 16384, 00:04:46.532 "uuid": "0be42be8-8213-46a4-9cb6-218f9b979387", 00:04:46.532 "assigned_rate_limits": { 00:04:46.532 "rw_ios_per_sec": 0, 00:04:46.532 "rw_mbytes_per_sec": 0, 00:04:46.532 "r_mbytes_per_sec": 0, 00:04:46.532 "w_mbytes_per_sec": 0 00:04:46.532 }, 00:04:46.532 "claimed": false, 00:04:46.532 "zoned": false, 00:04:46.532 "supported_io_types": { 00:04:46.532 "read": true, 00:04:46.532 "write": true, 00:04:46.532 "unmap": true, 00:04:46.532 "flush": true, 00:04:46.532 "reset": true, 00:04:46.532 "nvme_admin": false, 00:04:46.532 "nvme_io": false, 00:04:46.532 "nvme_io_md": false, 00:04:46.532 "write_zeroes": true, 00:04:46.532 "zcopy": true, 00:04:46.532 "get_zone_info": false, 00:04:46.532 "zone_management": false, 00:04:46.532 "zone_append": false, 00:04:46.532 "compare": false, 00:04:46.532 "compare_and_write": false, 00:04:46.532 "abort": true, 00:04:46.532 "seek_hole": false, 00:04:46.532 "seek_data": false, 00:04:46.532 "copy": true, 00:04:46.532 "nvme_iov_md": false 00:04:46.532 }, 00:04:46.532 "memory_domains": [ 00:04:46.532 { 00:04:46.532 "dma_device_id": "system", 00:04:46.532 "dma_device_type": 1 00:04:46.532 }, 00:04:46.532 { 00:04:46.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.532 "dma_device_type": 2 00:04:46.532 } 00:04:46.532 ], 00:04:46.532 "driver_specific": {} 00:04:46.532 } 00:04:46.532 ]' 00:04:46.532 23:10:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:46.790 23:10:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:46.790 23:10:44 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:46.790 23:10:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.790 23:10:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.790 [2024-07-25 23:10:44.295291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:46.790 [2024-07-25 23:10:44.295331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:46.790 [2024-07-25 23:10:44.295381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ce87f0 00:04:46.790 
[2024-07-25 23:10:44.295395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:46.790 [2024-07-25 23:10:44.297077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:46.790 [2024-07-25 23:10:44.297104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:46.790 Passthru0 00:04:46.790 23:10:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.790 23:10:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:46.790 23:10:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.790 23:10:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.790 23:10:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.790 23:10:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:46.790 { 00:04:46.790 "name": "Malloc0", 00:04:46.790 "aliases": [ 00:04:46.790 "0be42be8-8213-46a4-9cb6-218f9b979387" 00:04:46.790 ], 00:04:46.790 "product_name": "Malloc disk", 00:04:46.790 "block_size": 512, 00:04:46.790 "num_blocks": 16384, 00:04:46.790 "uuid": "0be42be8-8213-46a4-9cb6-218f9b979387", 00:04:46.790 "assigned_rate_limits": { 00:04:46.790 "rw_ios_per_sec": 0, 00:04:46.790 "rw_mbytes_per_sec": 0, 00:04:46.790 "r_mbytes_per_sec": 0, 00:04:46.790 "w_mbytes_per_sec": 0 00:04:46.790 }, 00:04:46.790 "claimed": true, 00:04:46.790 "claim_type": "exclusive_write", 00:04:46.790 "zoned": false, 00:04:46.790 "supported_io_types": { 00:04:46.790 "read": true, 00:04:46.790 "write": true, 00:04:46.790 "unmap": true, 00:04:46.790 "flush": true, 00:04:46.790 "reset": true, 00:04:46.790 "nvme_admin": false, 00:04:46.790 "nvme_io": false, 00:04:46.790 "nvme_io_md": false, 00:04:46.790 "write_zeroes": true, 00:04:46.790 "zcopy": true, 00:04:46.790 "get_zone_info": false, 00:04:46.790 "zone_management": false, 00:04:46.790 "zone_append": false, 00:04:46.790 "compare": false, 00:04:46.790 "compare_and_write": false, 00:04:46.790 "abort": true, 00:04:46.790 "seek_hole": false, 00:04:46.790 "seek_data": false, 00:04:46.790 "copy": true, 00:04:46.790 "nvme_iov_md": false 00:04:46.790 }, 00:04:46.790 "memory_domains": [ 00:04:46.790 { 00:04:46.790 "dma_device_id": "system", 00:04:46.790 "dma_device_type": 1 00:04:46.790 }, 00:04:46.790 { 00:04:46.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.790 "dma_device_type": 2 00:04:46.790 } 00:04:46.790 ], 00:04:46.790 "driver_specific": {} 00:04:46.790 }, 00:04:46.790 { 00:04:46.790 "name": "Passthru0", 00:04:46.790 "aliases": [ 00:04:46.790 "067f17ab-1e6c-5c47-9306-a7b6b6464b1b" 00:04:46.790 ], 00:04:46.790 "product_name": "passthru", 00:04:46.790 "block_size": 512, 00:04:46.790 "num_blocks": 16384, 00:04:46.790 "uuid": "067f17ab-1e6c-5c47-9306-a7b6b6464b1b", 00:04:46.790 "assigned_rate_limits": { 00:04:46.790 "rw_ios_per_sec": 0, 00:04:46.790 "rw_mbytes_per_sec": 0, 00:04:46.790 "r_mbytes_per_sec": 0, 00:04:46.790 "w_mbytes_per_sec": 0 00:04:46.790 }, 00:04:46.790 "claimed": false, 00:04:46.790 "zoned": false, 00:04:46.790 "supported_io_types": { 00:04:46.790 "read": true, 00:04:46.790 "write": true, 00:04:46.790 "unmap": true, 00:04:46.790 "flush": true, 00:04:46.790 "reset": true, 00:04:46.790 "nvme_admin": false, 00:04:46.791 "nvme_io": false, 00:04:46.791 "nvme_io_md": false, 00:04:46.791 "write_zeroes": true, 00:04:46.791 "zcopy": true, 00:04:46.791 "get_zone_info": false, 00:04:46.791 "zone_management": false, 00:04:46.791 "zone_append": false, 00:04:46.791 
"compare": false, 00:04:46.791 "compare_and_write": false, 00:04:46.791 "abort": true, 00:04:46.791 "seek_hole": false, 00:04:46.791 "seek_data": false, 00:04:46.791 "copy": true, 00:04:46.791 "nvme_iov_md": false 00:04:46.791 }, 00:04:46.791 "memory_domains": [ 00:04:46.791 { 00:04:46.791 "dma_device_id": "system", 00:04:46.791 "dma_device_type": 1 00:04:46.791 }, 00:04:46.791 { 00:04:46.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.791 "dma_device_type": 2 00:04:46.791 } 00:04:46.791 ], 00:04:46.791 "driver_specific": { 00:04:46.791 "passthru": { 00:04:46.791 "name": "Passthru0", 00:04:46.791 "base_bdev_name": "Malloc0" 00:04:46.791 } 00:04:46.791 } 00:04:46.791 } 00:04:46.791 ]' 00:04:46.791 23:10:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:46.791 23:10:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:46.791 23:10:44 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:46.791 23:10:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.791 23:10:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.791 23:10:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.791 23:10:44 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:46.791 23:10:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.791 23:10:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.791 23:10:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.791 23:10:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:46.791 23:10:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.791 23:10:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.791 23:10:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.791 23:10:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:46.791 23:10:44 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:46.791 23:10:44 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:46.791 00:04:46.791 real 0m0.231s 00:04:46.791 user 0m0.148s 00:04:46.791 sys 0m0.025s 00:04:46.791 23:10:44 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.791 23:10:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.791 ************************************ 00:04:46.791 END TEST rpc_integrity 00:04:46.791 ************************************ 00:04:46.791 23:10:44 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:46.791 23:10:44 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.791 23:10:44 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.791 23:10:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.791 ************************************ 00:04:46.791 START TEST rpc_plugins 00:04:46.791 ************************************ 00:04:46.791 23:10:44 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:46.791 23:10:44 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:46.791 23:10:44 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.791 23:10:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.791 23:10:44 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.791 23:10:44 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:46.791 
23:10:44 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:46.791 23:10:44 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.791 23:10:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.791 23:10:44 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.791 23:10:44 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:46.791 { 00:04:46.791 "name": "Malloc1", 00:04:46.791 "aliases": [ 00:04:46.791 "536ca8a4-ac62-40e4-b986-dd544cc9f82f" 00:04:46.791 ], 00:04:46.791 "product_name": "Malloc disk", 00:04:46.791 "block_size": 4096, 00:04:46.791 "num_blocks": 256, 00:04:46.791 "uuid": "536ca8a4-ac62-40e4-b986-dd544cc9f82f", 00:04:46.791 "assigned_rate_limits": { 00:04:46.791 "rw_ios_per_sec": 0, 00:04:46.791 "rw_mbytes_per_sec": 0, 00:04:46.791 "r_mbytes_per_sec": 0, 00:04:46.791 "w_mbytes_per_sec": 0 00:04:46.791 }, 00:04:46.791 "claimed": false, 00:04:46.791 "zoned": false, 00:04:46.791 "supported_io_types": { 00:04:46.791 "read": true, 00:04:46.791 "write": true, 00:04:46.791 "unmap": true, 00:04:46.791 "flush": true, 00:04:46.791 "reset": true, 00:04:46.791 "nvme_admin": false, 00:04:46.791 "nvme_io": false, 00:04:46.791 "nvme_io_md": false, 00:04:46.791 "write_zeroes": true, 00:04:46.791 "zcopy": true, 00:04:46.791 "get_zone_info": false, 00:04:46.791 "zone_management": false, 00:04:46.791 "zone_append": false, 00:04:46.791 "compare": false, 00:04:46.791 "compare_and_write": false, 00:04:46.791 "abort": true, 00:04:46.791 "seek_hole": false, 00:04:46.791 "seek_data": false, 00:04:46.791 "copy": true, 00:04:46.791 "nvme_iov_md": false 00:04:46.791 }, 00:04:46.791 "memory_domains": [ 00:04:46.791 { 00:04:46.791 "dma_device_id": "system", 00:04:46.791 "dma_device_type": 1 00:04:46.791 }, 00:04:46.791 { 00:04:46.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.791 "dma_device_type": 2 00:04:46.791 } 00:04:46.791 ], 00:04:46.791 "driver_specific": {} 00:04:46.791 } 00:04:46.791 ]' 00:04:46.791 23:10:44 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:47.048 23:10:44 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:47.048 23:10:44 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:47.048 23:10:44 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.048 23:10:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:47.048 23:10:44 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.048 23:10:44 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:47.048 23:10:44 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.048 23:10:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:47.048 23:10:44 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.048 23:10:44 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:47.048 23:10:44 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:47.048 23:10:44 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:47.048 00:04:47.048 real 0m0.118s 00:04:47.048 user 0m0.072s 00:04:47.048 sys 0m0.012s 00:04:47.048 23:10:44 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.048 23:10:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:47.048 ************************************ 00:04:47.048 END TEST rpc_plugins 00:04:47.048 ************************************ 00:04:47.048 23:10:44 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:04:47.048 23:10:44 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.048 23:10:44 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.048 23:10:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.048 ************************************ 00:04:47.048 START TEST rpc_trace_cmd_test 00:04:47.048 ************************************ 00:04:47.048 23:10:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:47.048 23:10:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:47.048 23:10:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:47.048 23:10:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.048 23:10:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:47.048 23:10:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.048 23:10:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:47.048 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1249486", 00:04:47.048 "tpoint_group_mask": "0x8", 00:04:47.048 "iscsi_conn": { 00:04:47.048 "mask": "0x2", 00:04:47.048 "tpoint_mask": "0x0" 00:04:47.048 }, 00:04:47.048 "scsi": { 00:04:47.049 "mask": "0x4", 00:04:47.049 "tpoint_mask": "0x0" 00:04:47.049 }, 00:04:47.049 "bdev": { 00:04:47.049 "mask": "0x8", 00:04:47.049 "tpoint_mask": "0xffffffffffffffff" 00:04:47.049 }, 00:04:47.049 "nvmf_rdma": { 00:04:47.049 "mask": "0x10", 00:04:47.049 "tpoint_mask": "0x0" 00:04:47.049 }, 00:04:47.049 "nvmf_tcp": { 00:04:47.049 "mask": "0x20", 00:04:47.049 "tpoint_mask": "0x0" 00:04:47.049 }, 00:04:47.049 "ftl": { 00:04:47.049 "mask": "0x40", 00:04:47.049 "tpoint_mask": "0x0" 00:04:47.049 }, 00:04:47.049 "blobfs": { 00:04:47.049 "mask": "0x80", 00:04:47.049 "tpoint_mask": "0x0" 00:04:47.049 }, 00:04:47.049 "dsa": { 00:04:47.049 "mask": "0x200", 00:04:47.049 "tpoint_mask": "0x0" 00:04:47.049 }, 00:04:47.049 "thread": { 00:04:47.049 "mask": "0x400", 00:04:47.049 "tpoint_mask": "0x0" 00:04:47.049 }, 00:04:47.049 "nvme_pcie": { 00:04:47.049 "mask": "0x800", 00:04:47.049 "tpoint_mask": "0x0" 00:04:47.049 }, 00:04:47.049 "iaa": { 00:04:47.049 "mask": "0x1000", 00:04:47.049 "tpoint_mask": "0x0" 00:04:47.049 }, 00:04:47.049 "nvme_tcp": { 00:04:47.049 "mask": "0x2000", 00:04:47.049 "tpoint_mask": "0x0" 00:04:47.049 }, 00:04:47.049 "bdev_nvme": { 00:04:47.049 "mask": "0x4000", 00:04:47.049 "tpoint_mask": "0x0" 00:04:47.049 }, 00:04:47.049 "sock": { 00:04:47.049 "mask": "0x8000", 00:04:47.049 "tpoint_mask": "0x0" 00:04:47.049 } 00:04:47.049 }' 00:04:47.049 23:10:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:47.049 23:10:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:47.049 23:10:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:47.049 23:10:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:47.049 23:10:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:47.049 23:10:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:47.049 23:10:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:47.307 23:10:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:47.307 23:10:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:47.307 23:10:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:47.307 00:04:47.307 real 
0m0.201s 00:04:47.307 user 0m0.178s 00:04:47.307 sys 0m0.017s 00:04:47.307 23:10:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.307 23:10:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:47.307 ************************************ 00:04:47.307 END TEST rpc_trace_cmd_test 00:04:47.307 ************************************ 00:04:47.307 23:10:44 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:47.307 23:10:44 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:47.307 23:10:44 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:47.307 23:10:44 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.307 23:10:44 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.307 23:10:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.307 ************************************ 00:04:47.307 START TEST rpc_daemon_integrity 00:04:47.307 ************************************ 00:04:47.307 23:10:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:47.307 23:10:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:47.307 23:10:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.307 23:10:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.307 23:10:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.307 23:10:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:47.307 23:10:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:47.307 23:10:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:47.307 23:10:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:47.307 23:10:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.307 23:10:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.307 23:10:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.307 23:10:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:47.307 23:10:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:47.307 23:10:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.307 23:10:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.307 23:10:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.307 23:10:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:47.307 { 00:04:47.307 "name": "Malloc2", 00:04:47.307 "aliases": [ 00:04:47.307 "63909d70-da6e-4df9-b15b-522489d39a1f" 00:04:47.307 ], 00:04:47.307 "product_name": "Malloc disk", 00:04:47.307 "block_size": 512, 00:04:47.307 "num_blocks": 16384, 00:04:47.307 "uuid": "63909d70-da6e-4df9-b15b-522489d39a1f", 00:04:47.307 "assigned_rate_limits": { 00:04:47.307 "rw_ios_per_sec": 0, 00:04:47.307 "rw_mbytes_per_sec": 0, 00:04:47.307 "r_mbytes_per_sec": 0, 00:04:47.307 "w_mbytes_per_sec": 0 00:04:47.307 }, 00:04:47.307 "claimed": false, 00:04:47.307 "zoned": false, 00:04:47.307 "supported_io_types": { 00:04:47.307 "read": true, 00:04:47.307 "write": true, 00:04:47.307 "unmap": true, 00:04:47.307 "flush": true, 00:04:47.307 "reset": true, 00:04:47.307 "nvme_admin": false, 00:04:47.307 "nvme_io": false, 00:04:47.307 "nvme_io_md": false, 00:04:47.307 "write_zeroes": true, 00:04:47.307 "zcopy": true, 
00:04:47.307 "get_zone_info": false, 00:04:47.307 "zone_management": false, 00:04:47.307 "zone_append": false, 00:04:47.307 "compare": false, 00:04:47.307 "compare_and_write": false, 00:04:47.307 "abort": true, 00:04:47.307 "seek_hole": false, 00:04:47.307 "seek_data": false, 00:04:47.307 "copy": true, 00:04:47.307 "nvme_iov_md": false 00:04:47.307 }, 00:04:47.307 "memory_domains": [ 00:04:47.307 { 00:04:47.307 "dma_device_id": "system", 00:04:47.307 "dma_device_type": 1 00:04:47.307 }, 00:04:47.307 { 00:04:47.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.307 "dma_device_type": 2 00:04:47.307 } 00:04:47.307 ], 00:04:47.307 "driver_specific": {} 00:04:47.307 } 00:04:47.307 ]' 00:04:47.307 23:10:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:47.307 23:10:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:47.307 23:10:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:47.308 23:10:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.308 23:10:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.308 [2024-07-25 23:10:44.977477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:47.308 [2024-07-25 23:10:44.977522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:47.308 [2024-07-25 23:10:44.977549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e8c490 00:04:47.308 [2024-07-25 23:10:44.977564] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:47.308 [2024-07-25 23:10:44.978884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:47.308 [2024-07-25 23:10:44.978912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:47.308 Passthru0 00:04:47.308 23:10:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.308 23:10:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:47.308 23:10:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.308 23:10:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.308 23:10:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.308 23:10:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:47.308 { 00:04:47.308 "name": "Malloc2", 00:04:47.308 "aliases": [ 00:04:47.308 "63909d70-da6e-4df9-b15b-522489d39a1f" 00:04:47.308 ], 00:04:47.308 "product_name": "Malloc disk", 00:04:47.308 "block_size": 512, 00:04:47.308 "num_blocks": 16384, 00:04:47.308 "uuid": "63909d70-da6e-4df9-b15b-522489d39a1f", 00:04:47.308 "assigned_rate_limits": { 00:04:47.308 "rw_ios_per_sec": 0, 00:04:47.308 "rw_mbytes_per_sec": 0, 00:04:47.308 "r_mbytes_per_sec": 0, 00:04:47.308 "w_mbytes_per_sec": 0 00:04:47.308 }, 00:04:47.308 "claimed": true, 00:04:47.308 "claim_type": "exclusive_write", 00:04:47.308 "zoned": false, 00:04:47.308 "supported_io_types": { 00:04:47.308 "read": true, 00:04:47.308 "write": true, 00:04:47.308 "unmap": true, 00:04:47.308 "flush": true, 00:04:47.308 "reset": true, 00:04:47.308 "nvme_admin": false, 00:04:47.308 "nvme_io": false, 00:04:47.308 "nvme_io_md": false, 00:04:47.308 "write_zeroes": true, 00:04:47.308 "zcopy": true, 00:04:47.308 "get_zone_info": false, 00:04:47.308 "zone_management": false, 00:04:47.308 "zone_append": false, 00:04:47.308 
"compare": false, 00:04:47.308 "compare_and_write": false, 00:04:47.308 "abort": true, 00:04:47.308 "seek_hole": false, 00:04:47.308 "seek_data": false, 00:04:47.308 "copy": true, 00:04:47.308 "nvme_iov_md": false 00:04:47.308 }, 00:04:47.308 "memory_domains": [ 00:04:47.308 { 00:04:47.308 "dma_device_id": "system", 00:04:47.308 "dma_device_type": 1 00:04:47.308 }, 00:04:47.308 { 00:04:47.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.308 "dma_device_type": 2 00:04:47.308 } 00:04:47.308 ], 00:04:47.308 "driver_specific": {} 00:04:47.308 }, 00:04:47.308 { 00:04:47.308 "name": "Passthru0", 00:04:47.308 "aliases": [ 00:04:47.308 "68220fa7-495e-5237-b7bf-543fb43ecffd" 00:04:47.308 ], 00:04:47.308 "product_name": "passthru", 00:04:47.308 "block_size": 512, 00:04:47.308 "num_blocks": 16384, 00:04:47.308 "uuid": "68220fa7-495e-5237-b7bf-543fb43ecffd", 00:04:47.308 "assigned_rate_limits": { 00:04:47.308 "rw_ios_per_sec": 0, 00:04:47.308 "rw_mbytes_per_sec": 0, 00:04:47.308 "r_mbytes_per_sec": 0, 00:04:47.308 "w_mbytes_per_sec": 0 00:04:47.308 }, 00:04:47.308 "claimed": false, 00:04:47.308 "zoned": false, 00:04:47.308 "supported_io_types": { 00:04:47.308 "read": true, 00:04:47.308 "write": true, 00:04:47.308 "unmap": true, 00:04:47.308 "flush": true, 00:04:47.308 "reset": true, 00:04:47.308 "nvme_admin": false, 00:04:47.308 "nvme_io": false, 00:04:47.308 "nvme_io_md": false, 00:04:47.308 "write_zeroes": true, 00:04:47.308 "zcopy": true, 00:04:47.308 "get_zone_info": false, 00:04:47.308 "zone_management": false, 00:04:47.308 "zone_append": false, 00:04:47.308 "compare": false, 00:04:47.308 "compare_and_write": false, 00:04:47.308 "abort": true, 00:04:47.308 "seek_hole": false, 00:04:47.308 "seek_data": false, 00:04:47.308 "copy": true, 00:04:47.308 "nvme_iov_md": false 00:04:47.308 }, 00:04:47.308 "memory_domains": [ 00:04:47.308 { 00:04:47.308 "dma_device_id": "system", 00:04:47.308 "dma_device_type": 1 00:04:47.308 }, 00:04:47.308 { 00:04:47.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.308 "dma_device_type": 2 00:04:47.308 } 00:04:47.308 ], 00:04:47.308 "driver_specific": { 00:04:47.308 "passthru": { 00:04:47.308 "name": "Passthru0", 00:04:47.308 "base_bdev_name": "Malloc2" 00:04:47.308 } 00:04:47.308 } 00:04:47.308 } 00:04:47.308 ]' 00:04:47.308 23:10:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:47.566 23:10:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:47.566 23:10:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:47.566 23:10:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.566 23:10:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.566 23:10:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.566 23:10:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:47.566 23:10:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.566 23:10:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.566 23:10:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.566 23:10:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:47.566 23:10:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.566 23:10:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.566 23:10:45 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.566 23:10:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:47.566 23:10:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:47.566 23:10:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:47.566 00:04:47.566 real 0m0.223s 00:04:47.566 user 0m0.143s 00:04:47.566 sys 0m0.025s 00:04:47.566 23:10:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.566 23:10:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.566 ************************************ 00:04:47.566 END TEST rpc_daemon_integrity 00:04:47.566 ************************************ 00:04:47.566 23:10:45 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:47.566 23:10:45 rpc -- rpc/rpc.sh@84 -- # killprocess 1249486 00:04:47.566 23:10:45 rpc -- common/autotest_common.sh@950 -- # '[' -z 1249486 ']' 00:04:47.566 23:10:45 rpc -- common/autotest_common.sh@954 -- # kill -0 1249486 00:04:47.566 23:10:45 rpc -- common/autotest_common.sh@955 -- # uname 00:04:47.566 23:10:45 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:47.566 23:10:45 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1249486 00:04:47.566 23:10:45 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:47.566 23:10:45 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:47.566 23:10:45 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1249486' 00:04:47.566 killing process with pid 1249486 00:04:47.566 23:10:45 rpc -- common/autotest_common.sh@969 -- # kill 1249486 00:04:47.566 23:10:45 rpc -- common/autotest_common.sh@974 -- # wait 1249486 00:04:48.131 00:04:48.131 real 0m1.908s 00:04:48.131 user 0m2.415s 00:04:48.131 sys 0m0.598s 00:04:48.131 23:10:45 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:48.131 23:10:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.131 ************************************ 00:04:48.131 END TEST rpc 00:04:48.131 ************************************ 00:04:48.131 23:10:45 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:48.131 23:10:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:48.131 23:10:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:48.131 23:10:45 -- common/autotest_common.sh@10 -- # set +x 00:04:48.131 ************************************ 00:04:48.131 START TEST skip_rpc 00:04:48.131 ************************************ 00:04:48.131 23:10:45 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:48.131 * Looking for test storage... 
00:04:48.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:48.131 23:10:45 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:48.131 23:10:45 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:48.131 23:10:45 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:48.131 23:10:45 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:48.131 23:10:45 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:48.131 23:10:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.131 ************************************ 00:04:48.131 START TEST skip_rpc 00:04:48.131 ************************************ 00:04:48.131 23:10:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:48.131 23:10:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1249897 00:04:48.131 23:10:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:48.131 23:10:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.131 23:10:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:48.131 [2024-07-25 23:10:45.739431] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:04:48.131 [2024-07-25 23:10:45.739506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1249897 ] 00:04:48.131 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.131 [2024-07-25 23:10:45.769877] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
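The skip_rpc case above hinges on one flag: the target is launched with --no-rpc-server, so nothing ever listens on /var/tmp/spdk.sock and the rpc_cmd spdk_get_version call that follows is expected to fail. A minimal standalone sketch of that flow (paths shortened relative to the tree used here, and a plain if in place of the NOT helper):

    # sketch: start the target without an RPC server; any rpc.py call should fail
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5                                  # no socket to poll, so a fixed wait
    if ./scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC succeeded without an RPC server" >&2
        exit 1
    fi
    kill "$spdk_pid" && wait "$spdk_pid"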
00:04:48.131 [2024-07-25 23:10:45.800340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.390 [2024-07-25 23:10:45.896006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1249897 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1249897 ']' 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1249897 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1249897 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1249897' 00:04:53.694 killing process with pid 1249897 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1249897 00:04:53.694 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1249897 00:04:53.694 00:04:53.694 real 0m5.454s 00:04:53.694 user 0m5.131s 00:04:53.694 sys 0m0.316s 00:04:53.694 23:10:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.694 23:10:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.694 ************************************ 00:04:53.694 END TEST skip_rpc 00:04:53.694 ************************************ 00:04:53.694 23:10:51 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:53.694 23:10:51 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.694 23:10:51 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 
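The NOT wrapper driving that check inverts a command's exit status: it runs the command, records the status in es, and succeeds only when the command failed. Stripped of the xtrace and argument-validation plumbing seen in the trace above, its core logic amounts to the following (a simplified sketch, not the literal autotest_common.sh definition):

    # simplified sketch of the NOT helper: succeed only if the wrapped command fails
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }
    NOT ./scripts/rpc.py spdk_get_version   # passes while no RPC server is running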
00:04:53.694 23:10:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.694 ************************************ 00:04:53.694 START TEST skip_rpc_with_json 00:04:53.694 ************************************ 00:04:53.694 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:53.694 23:10:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:53.694 23:10:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1250586 00:04:53.694 23:10:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.694 23:10:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:53.694 23:10:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1250586 00:04:53.694 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1250586 ']' 00:04:53.694 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.694 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:53.694 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.694 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:53.694 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.694 [2024-07-25 23:10:51.244206] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:04:53.694 [2024-07-25 23:10:51.244307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1250586 ] 00:04:53.694 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.694 [2024-07-25 23:10:51.276452] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
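Here, by contrast, the target is started with its RPC server enabled, and waitforlisten blocks until the new process actually answers on /var/tmp/spdk.sock instead of sleeping for a fixed interval. Its core can be sketched as a poll loop (simplified; the real helper in autotest_common.sh also enforces a retry limit):

    # simplified sketch of waitforlisten: poll the RPC socket until it answers
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        while ! ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; do
            kill -0 "$pid" 2>/dev/null || return 1   # give up if the target died
            sleep 0.1
        done
    }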
00:04:53.694 [2024-07-25 23:10:51.302008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.694 [2024-07-25 23:10:51.389930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.952 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:53.952 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:53.952 23:10:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:53.952 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.952 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.952 [2024-07-25 23:10:51.640606] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:53.952 request: 00:04:53.952 { 00:04:53.952 "trtype": "tcp", 00:04:53.952 "method": "nvmf_get_transports", 00:04:53.952 "req_id": 1 00:04:53.952 } 00:04:53.952 Got JSON-RPC error response 00:04:53.952 response: 00:04:53.952 { 00:04:53.952 "code": -19, 00:04:53.952 "message": "No such device" 00:04:53.952 } 00:04:53.952 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:53.952 23:10:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:53.952 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.952 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.952 [2024-07-25 23:10:51.648754] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:53.952 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.952 23:10:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:53.952 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.952 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.210 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.210 23:10:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:54.210 { 00:04:54.210 "subsystems": [ 00:04:54.210 { 00:04:54.210 "subsystem": "vfio_user_target", 00:04:54.210 "config": null 00:04:54.210 }, 00:04:54.210 { 00:04:54.210 "subsystem": "keyring", 00:04:54.210 "config": [] 00:04:54.210 }, 00:04:54.210 { 00:04:54.210 "subsystem": "iobuf", 00:04:54.210 "config": [ 00:04:54.210 { 00:04:54.210 "method": "iobuf_set_options", 00:04:54.210 "params": { 00:04:54.210 "small_pool_count": 8192, 00:04:54.210 "large_pool_count": 1024, 00:04:54.210 "small_bufsize": 8192, 00:04:54.210 "large_bufsize": 135168 00:04:54.210 } 00:04:54.210 } 00:04:54.210 ] 00:04:54.210 }, 00:04:54.210 { 00:04:54.210 "subsystem": "sock", 00:04:54.210 "config": [ 00:04:54.210 { 00:04:54.210 "method": "sock_set_default_impl", 00:04:54.210 "params": { 00:04:54.210 "impl_name": "posix" 00:04:54.210 } 00:04:54.210 }, 00:04:54.210 { 00:04:54.210 "method": "sock_impl_set_options", 00:04:54.210 "params": { 00:04:54.210 "impl_name": "ssl", 00:04:54.210 "recv_buf_size": 4096, 00:04:54.210 "send_buf_size": 4096, 00:04:54.210 "enable_recv_pipe": true, 00:04:54.210 "enable_quickack": false, 00:04:54.210 "enable_placement_id": 0, 00:04:54.210 "enable_zerocopy_send_server": true, 00:04:54.210 
"enable_zerocopy_send_client": false, 00:04:54.210 "zerocopy_threshold": 0, 00:04:54.210 "tls_version": 0, 00:04:54.210 "enable_ktls": false 00:04:54.210 } 00:04:54.210 }, 00:04:54.210 { 00:04:54.210 "method": "sock_impl_set_options", 00:04:54.210 "params": { 00:04:54.210 "impl_name": "posix", 00:04:54.210 "recv_buf_size": 2097152, 00:04:54.210 "send_buf_size": 2097152, 00:04:54.210 "enable_recv_pipe": true, 00:04:54.210 "enable_quickack": false, 00:04:54.210 "enable_placement_id": 0, 00:04:54.210 "enable_zerocopy_send_server": true, 00:04:54.210 "enable_zerocopy_send_client": false, 00:04:54.210 "zerocopy_threshold": 0, 00:04:54.210 "tls_version": 0, 00:04:54.210 "enable_ktls": false 00:04:54.210 } 00:04:54.210 } 00:04:54.210 ] 00:04:54.210 }, 00:04:54.210 { 00:04:54.210 "subsystem": "vmd", 00:04:54.210 "config": [] 00:04:54.210 }, 00:04:54.210 { 00:04:54.210 "subsystem": "accel", 00:04:54.211 "config": [ 00:04:54.211 { 00:04:54.211 "method": "accel_set_options", 00:04:54.211 "params": { 00:04:54.211 "small_cache_size": 128, 00:04:54.211 "large_cache_size": 16, 00:04:54.211 "task_count": 2048, 00:04:54.211 "sequence_count": 2048, 00:04:54.211 "buf_count": 2048 00:04:54.211 } 00:04:54.211 } 00:04:54.211 ] 00:04:54.211 }, 00:04:54.211 { 00:04:54.211 "subsystem": "bdev", 00:04:54.211 "config": [ 00:04:54.211 { 00:04:54.211 "method": "bdev_set_options", 00:04:54.211 "params": { 00:04:54.211 "bdev_io_pool_size": 65535, 00:04:54.211 "bdev_io_cache_size": 256, 00:04:54.211 "bdev_auto_examine": true, 00:04:54.211 "iobuf_small_cache_size": 128, 00:04:54.211 "iobuf_large_cache_size": 16 00:04:54.211 } 00:04:54.211 }, 00:04:54.211 { 00:04:54.211 "method": "bdev_raid_set_options", 00:04:54.211 "params": { 00:04:54.211 "process_window_size_kb": 1024, 00:04:54.211 "process_max_bandwidth_mb_sec": 0 00:04:54.211 } 00:04:54.211 }, 00:04:54.211 { 00:04:54.211 "method": "bdev_iscsi_set_options", 00:04:54.211 "params": { 00:04:54.211 "timeout_sec": 30 00:04:54.211 } 00:04:54.211 }, 00:04:54.211 { 00:04:54.211 "method": "bdev_nvme_set_options", 00:04:54.211 "params": { 00:04:54.211 "action_on_timeout": "none", 00:04:54.211 "timeout_us": 0, 00:04:54.211 "timeout_admin_us": 0, 00:04:54.211 "keep_alive_timeout_ms": 10000, 00:04:54.211 "arbitration_burst": 0, 00:04:54.211 "low_priority_weight": 0, 00:04:54.211 "medium_priority_weight": 0, 00:04:54.211 "high_priority_weight": 0, 00:04:54.211 "nvme_adminq_poll_period_us": 10000, 00:04:54.211 "nvme_ioq_poll_period_us": 0, 00:04:54.211 "io_queue_requests": 0, 00:04:54.211 "delay_cmd_submit": true, 00:04:54.211 "transport_retry_count": 4, 00:04:54.211 "bdev_retry_count": 3, 00:04:54.211 "transport_ack_timeout": 0, 00:04:54.211 "ctrlr_loss_timeout_sec": 0, 00:04:54.211 "reconnect_delay_sec": 0, 00:04:54.211 "fast_io_fail_timeout_sec": 0, 00:04:54.211 "disable_auto_failback": false, 00:04:54.211 "generate_uuids": false, 00:04:54.211 "transport_tos": 0, 00:04:54.211 "nvme_error_stat": false, 00:04:54.211 "rdma_srq_size": 0, 00:04:54.211 "io_path_stat": false, 00:04:54.211 "allow_accel_sequence": false, 00:04:54.211 "rdma_max_cq_size": 0, 00:04:54.211 "rdma_cm_event_timeout_ms": 0, 00:04:54.211 "dhchap_digests": [ 00:04:54.211 "sha256", 00:04:54.211 "sha384", 00:04:54.211 "sha512" 00:04:54.211 ], 00:04:54.211 "dhchap_dhgroups": [ 00:04:54.211 "null", 00:04:54.211 "ffdhe2048", 00:04:54.211 "ffdhe3072", 00:04:54.211 "ffdhe4096", 00:04:54.211 "ffdhe6144", 00:04:54.211 "ffdhe8192" 00:04:54.211 ] 00:04:54.211 } 00:04:54.211 }, 00:04:54.211 { 00:04:54.211 "method": 
"bdev_nvme_set_hotplug", 00:04:54.211 "params": { 00:04:54.211 "period_us": 100000, 00:04:54.211 "enable": false 00:04:54.211 } 00:04:54.211 }, 00:04:54.211 { 00:04:54.211 "method": "bdev_wait_for_examine" 00:04:54.211 } 00:04:54.211 ] 00:04:54.211 }, 00:04:54.211 { 00:04:54.211 "subsystem": "scsi", 00:04:54.211 "config": null 00:04:54.211 }, 00:04:54.211 { 00:04:54.211 "subsystem": "scheduler", 00:04:54.211 "config": [ 00:04:54.211 { 00:04:54.211 "method": "framework_set_scheduler", 00:04:54.211 "params": { 00:04:54.211 "name": "static" 00:04:54.211 } 00:04:54.211 } 00:04:54.211 ] 00:04:54.211 }, 00:04:54.211 { 00:04:54.211 "subsystem": "vhost_scsi", 00:04:54.211 "config": [] 00:04:54.211 }, 00:04:54.211 { 00:04:54.211 "subsystem": "vhost_blk", 00:04:54.211 "config": [] 00:04:54.211 }, 00:04:54.211 { 00:04:54.211 "subsystem": "ublk", 00:04:54.211 "config": [] 00:04:54.211 }, 00:04:54.211 { 00:04:54.211 "subsystem": "nbd", 00:04:54.211 "config": [] 00:04:54.211 }, 00:04:54.211 { 00:04:54.211 "subsystem": "nvmf", 00:04:54.211 "config": [ 00:04:54.211 { 00:04:54.211 "method": "nvmf_set_config", 00:04:54.211 "params": { 00:04:54.211 "discovery_filter": "match_any", 00:04:54.211 "admin_cmd_passthru": { 00:04:54.211 "identify_ctrlr": false 00:04:54.211 } 00:04:54.211 } 00:04:54.211 }, 00:04:54.211 { 00:04:54.211 "method": "nvmf_set_max_subsystems", 00:04:54.211 "params": { 00:04:54.211 "max_subsystems": 1024 00:04:54.211 } 00:04:54.211 }, 00:04:54.211 { 00:04:54.211 "method": "nvmf_set_crdt", 00:04:54.211 "params": { 00:04:54.211 "crdt1": 0, 00:04:54.211 "crdt2": 0, 00:04:54.211 "crdt3": 0 00:04:54.211 } 00:04:54.211 }, 00:04:54.211 { 00:04:54.211 "method": "nvmf_create_transport", 00:04:54.211 "params": { 00:04:54.211 "trtype": "TCP", 00:04:54.211 "max_queue_depth": 128, 00:04:54.211 "max_io_qpairs_per_ctrlr": 127, 00:04:54.211 "in_capsule_data_size": 4096, 00:04:54.211 "max_io_size": 131072, 00:04:54.211 "io_unit_size": 131072, 00:04:54.211 "max_aq_depth": 128, 00:04:54.211 "num_shared_buffers": 511, 00:04:54.211 "buf_cache_size": 4294967295, 00:04:54.211 "dif_insert_or_strip": false, 00:04:54.211 "zcopy": false, 00:04:54.211 "c2h_success": true, 00:04:54.211 "sock_priority": 0, 00:04:54.211 "abort_timeout_sec": 1, 00:04:54.211 "ack_timeout": 0, 00:04:54.211 "data_wr_pool_size": 0 00:04:54.211 } 00:04:54.211 } 00:04:54.211 ] 00:04:54.211 }, 00:04:54.211 { 00:04:54.211 "subsystem": "iscsi", 00:04:54.211 "config": [ 00:04:54.211 { 00:04:54.211 "method": "iscsi_set_options", 00:04:54.211 "params": { 00:04:54.211 "node_base": "iqn.2016-06.io.spdk", 00:04:54.211 "max_sessions": 128, 00:04:54.211 "max_connections_per_session": 2, 00:04:54.211 "max_queue_depth": 64, 00:04:54.211 "default_time2wait": 2, 00:04:54.211 "default_time2retain": 20, 00:04:54.211 "first_burst_length": 8192, 00:04:54.211 "immediate_data": true, 00:04:54.211 "allow_duplicated_isid": false, 00:04:54.211 "error_recovery_level": 0, 00:04:54.211 "nop_timeout": 60, 00:04:54.211 "nop_in_interval": 30, 00:04:54.211 "disable_chap": false, 00:04:54.211 "require_chap": false, 00:04:54.211 "mutual_chap": false, 00:04:54.211 "chap_group": 0, 00:04:54.211 "max_large_datain_per_connection": 64, 00:04:54.211 "max_r2t_per_connection": 4, 00:04:54.211 "pdu_pool_size": 36864, 00:04:54.211 "immediate_data_pool_size": 16384, 00:04:54.211 "data_out_pool_size": 2048 00:04:54.211 } 00:04:54.211 } 00:04:54.211 ] 00:04:54.211 } 00:04:54.211 ] 00:04:54.211 } 00:04:54.211 23:10:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT 
SIGTERM EXIT 00:04:54.211 23:10:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1250586 00:04:54.211 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1250586 ']' 00:04:54.211 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1250586 00:04:54.211 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:54.211 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:54.211 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1250586 00:04:54.211 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:54.211 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:54.211 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1250586' 00:04:54.211 killing process with pid 1250586 00:04:54.211 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1250586 00:04:54.211 23:10:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1250586 00:04:54.778 23:10:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1250728 00:04:54.778 23:10:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:54.778 23:10:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1250728 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1250728 ']' 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1250728 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1250728 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1250728' 00:05:00.037 killing process with pid 1250728 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1250728 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1250728 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:00.037 00:05:00.037 real 0m6.486s 00:05:00.037 user 0m6.092s 00:05:00.037 sys 0m0.679s 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.037 
************************************ 00:05:00.037 END TEST skip_rpc_with_json 00:05:00.037 ************************************ 00:05:00.037 23:10:57 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:00.037 23:10:57 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.037 23:10:57 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.037 23:10:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.037 ************************************ 00:05:00.037 START TEST skip_rpc_with_delay 00:05:00.037 ************************************ 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:00.037 23:10:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:00.295 [2024-07-25 23:10:57.778275] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
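The error above is deliberate: --wait-for-rpc defers subsystem initialization until an RPC requests it, which is meaningless when --no-rpc-server guarantees no RPC can ever arrive, so the target refuses the combination and the NOT assertion passes. In normal use the flag pairs with the framework_start_init RPC, roughly as follows (a sketch, reusing the poll-helper idea from above):

    # sketch: deferred initialization with --wait-for-rpc
    ./build/bin/spdk_tgt --wait-for-rpc -m 0x1 &
    waitforlisten $!                          # RPC server is up, subsystems are not
    ./scripts/rpc.py framework_start_init     # kick off full initialization
    ./scripts/rpc.py framework_wait_init      # returns once startup has completed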
00:05:00.295 [2024-07-25 23:10:57.778375] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:00.295 23:10:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:00.295 23:10:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:00.295 23:10:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:00.295 23:10:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:00.295 00:05:00.295 real 0m0.068s 00:05:00.295 user 0m0.045s 00:05:00.295 sys 0m0.022s 00:05:00.295 23:10:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.295 23:10:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:00.295 ************************************ 00:05:00.295 END TEST skip_rpc_with_delay 00:05:00.295 ************************************ 00:05:00.295 23:10:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:00.295 23:10:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:00.295 23:10:57 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:00.295 23:10:57 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.295 23:10:57 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.295 23:10:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.295 ************************************ 00:05:00.295 START TEST exit_on_failed_rpc_init 00:05:00.295 ************************************ 00:05:00.295 23:10:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:00.295 23:10:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1251446 00:05:00.295 23:10:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.295 23:10:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1251446 00:05:00.295 23:10:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1251446 ']' 00:05:00.295 23:10:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.296 23:10:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.296 23:10:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.296 23:10:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.296 23:10:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:00.296 [2024-07-25 23:10:57.891947] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
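The exit_on_failed_rpc_init case getting under way here starts one target that claims the default RPC socket, then launches a second target against the same socket and requires it to fail cleanly with a non-zero exit rather than hang. The shape of the check (a sketch; the in-use error quoted in the comment is the one this test expects):

    # sketch: a second target on the same RPC socket must fail, not hang
    ./build/bin/spdk_tgt -m 0x1 &
    first=$!
    waitforlisten "$first"
    if ./build/bin/spdk_tgt -m 0x2; then      # same default /var/tmp/spdk.sock
        echo "unexpected: second target started" >&2
        exit 1
    fi
    # expected: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.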
00:05:00.296 [2024-07-25 23:10:57.892023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1251446 ] 00:05:00.296 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.296 [2024-07-25 23:10:57.923632] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:00.296 [2024-07-25 23:10:57.953602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.554 [2024-07-25 23:10:58.046934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.812 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:00.812 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:00.812 23:10:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.812 23:10:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:00.812 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:00.812 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:00.812 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.812 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.812 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.812 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.812 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.812 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.812 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.812 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:00.812 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:00.812 [2024-07-25 23:10:58.358469] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:00.812 [2024-07-25 23:10:58.358548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1251456 ] 00:05:00.812 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.812 [2024-07-25 23:10:58.390750] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:05:00.812 [2024-07-25 23:10:58.420918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.812 [2024-07-25 23:10:58.514279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.812 [2024-07-25 23:10:58.514406] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:00.812 [2024-07-25 23:10:58.514428] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:00.812 [2024-07-25 23:10:58.514441] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:01.069 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:01.069 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:01.070 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:01.070 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:01.070 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:01.070 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:01.070 23:10:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:01.070 23:10:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1251446 00:05:01.070 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1251446 ']' 00:05:01.070 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1251446 00:05:01.070 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:01.070 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:01.070 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1251446 00:05:01.070 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:01.070 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:01.070 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1251446' 00:05:01.070 killing process with pid 1251446 00:05:01.070 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1251446 00:05:01.070 23:10:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1251446 00:05:01.327 00:05:01.327 real 0m1.196s 00:05:01.327 user 0m1.327s 00:05:01.327 sys 0m0.455s 00:05:01.327 23:10:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.327 23:10:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:01.327 ************************************ 00:05:01.327 END TEST exit_on_failed_rpc_init 00:05:01.327 ************************************ 00:05:01.585 23:10:59 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:01.585 00:05:01.585 real 0m13.453s 00:05:01.585 user 0m12.699s 00:05:01.585 sys 0m1.632s 00:05:01.585 23:10:59 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.585 23:10:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.585 
************************************ 00:05:01.585 END TEST skip_rpc 00:05:01.585 ************************************ 00:05:01.585 23:10:59 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:01.585 23:10:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.585 23:10:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.585 23:10:59 -- common/autotest_common.sh@10 -- # set +x 00:05:01.585 ************************************ 00:05:01.585 START TEST rpc_client 00:05:01.585 ************************************ 00:05:01.585 23:10:59 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:01.585 * Looking for test storage... 00:05:01.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:01.585 23:10:59 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:01.585 OK 00:05:01.585 23:10:59 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:01.585 00:05:01.585 real 0m0.065s 00:05:01.585 user 0m0.030s 00:05:01.585 sys 0m0.038s 00:05:01.585 23:10:59 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.585 23:10:59 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:01.585 ************************************ 00:05:01.585 END TEST rpc_client 00:05:01.585 ************************************ 00:05:01.585 23:10:59 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:01.585 23:10:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.585 23:10:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.585 23:10:59 -- common/autotest_common.sh@10 -- # set +x 00:05:01.585 ************************************ 00:05:01.585 START TEST json_config 00:05:01.585 ************************************ 00:05:01.585 23:10:59 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:01.585 23:10:59 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:01.585 23:10:59 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:01.585 23:10:59 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:01.585 23:10:59 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:01.585 23:10:59 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:01.585 23:10:59 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:01.585 23:10:59 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:01.585 23:10:59 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:01.585 23:10:59 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:01.585 23:10:59 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:01.585 23:10:59 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:01.585 23:10:59 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:01.585 23:10:59 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:01.585 23:10:59 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:01.585 23:10:59 
json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:01.585 23:10:59 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:01.585 23:10:59 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:01.585 23:10:59 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:01.585 23:10:59 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:01.585 23:10:59 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:01.585 23:10:59 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:01.585 23:10:59 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:01.585 23:10:59 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.585 23:10:59 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.585 23:10:59 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.585 23:10:59 json_config -- paths/export.sh@5 -- # export PATH 00:05:01.585 23:10:59 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.585 23:10:59 json_config -- nvmf/common.sh@47 -- # : 0 00:05:01.586 23:10:59 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:01.586 23:10:59 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:01.586 23:10:59 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:01.586 23:10:59 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:01.586 23:10:59 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:01.586 23:10:59 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:01.586 23:10:59 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:01.586 23:10:59 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:01.586 23:10:59 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:01.586 23:10:59 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:01.586 23:10:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:01.586 23:10:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:01.586 23:10:59 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:01.586 23:10:59 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:01.586 23:10:59 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:01.586 23:10:59 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:01.586 23:10:59 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:01.586 23:10:59 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:01.586 23:10:59 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:01.586 23:10:59 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:01.586 23:10:59 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:01.586 23:10:59 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:01.586 23:10:59 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:01.586 23:10:59 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:01.586 INFO: JSON configuration test init 00:05:01.586 23:10:59 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:01.586 23:10:59 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:01.586 23:10:59 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:01.586 23:10:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.586 23:10:59 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:01.586 23:10:59 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:01.586 23:10:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.586 23:10:59 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:01.586 23:10:59 json_config -- json_config/common.sh@9 -- # local app=target 00:05:01.586 23:10:59 json_config -- json_config/common.sh@10 -- # shift 00:05:01.586 23:10:59 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:01.586 23:10:59 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:01.586 23:10:59 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:01.586 23:10:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:01.586 23:10:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:01.586 23:10:59 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1251698 00:05:01.586 23:10:59 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r 
/var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:01.586 23:10:59 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:01.586 Waiting for target to run... 00:05:01.586 23:10:59 json_config -- json_config/common.sh@25 -- # waitforlisten 1251698 /var/tmp/spdk_tgt.sock 00:05:01.586 23:10:59 json_config -- common/autotest_common.sh@831 -- # '[' -z 1251698 ']' 00:05:01.586 23:10:59 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:01.586 23:10:59 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:01.586 23:10:59 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:01.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:01.586 23:10:59 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:01.586 23:10:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.844 [2024-07-25 23:10:59.323650] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:01.844 [2024-07-25 23:10:59.323739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1251698 ] 00:05:01.844 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.101 [2024-07-25 23:10:59.637400] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:02.101 [2024-07-25 23:10:59.670764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.101 [2024-07-25 23:10:59.733775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.665 23:11:00 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:02.665 23:11:00 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:02.665 23:11:00 json_config -- json_config/common.sh@26 -- # echo '' 00:05:02.665 00:05:02.665 23:11:00 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:02.665 23:11:00 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:02.665 23:11:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:02.665 23:11:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.665 23:11:00 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:02.665 23:11:00 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:02.665 23:11:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:02.665 23:11:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.665 23:11:00 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:02.665 23:11:00 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:02.665 23:11:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:05.946 23:11:03 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:05.946 23:11:03 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:05.946 
23:11:03 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:05.946 23:11:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.946 23:11:03 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:05.946 23:11:03 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:05.946 23:11:03 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:05.946 23:11:03 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:05.946 23:11:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:05.946 23:11:03 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:06.203 23:11:03 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:06.203 23:11:03 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:06.203 23:11:03 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:06.203 23:11:03 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:06.203 23:11:03 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:06.203 23:11:03 json_config -- json_config/json_config.sh@51 -- # sort 00:05:06.203 23:11:03 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:06.203 23:11:03 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:06.203 23:11:03 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:06.203 23:11:03 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:06.203 23:11:03 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:06.203 23:11:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.203 23:11:03 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:06.203 23:11:03 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:06.203 23:11:03 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:06.203 23:11:03 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:06.203 23:11:03 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:06.203 23:11:03 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:06.203 23:11:03 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:06.203 23:11:03 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:06.203 23:11:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.203 23:11:03 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:06.203 23:11:03 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:06.203 23:11:03 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:06.203 23:11:03 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:06.203 23:11:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:06.460 MallocForNvmf0 00:05:06.460 23:11:03 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:06.460 23:11:03 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:06.717 MallocForNvmf1 00:05:06.717 23:11:04 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:06.717 23:11:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:06.975 [2024-07-25 23:11:04.465872] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:06.975 23:11:04 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:06.975 23:11:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:07.233 23:11:04 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:07.233 23:11:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:07.490 23:11:04 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:07.490 23:11:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:07.747 23:11:05 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:07.748 23:11:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:07.748 [2024-07-25 23:11:05.445029] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:07.748 23:11:05 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:07.748 23:11:05 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:07.748 23:11:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.005 23:11:05 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:08.005 23:11:05 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:08.005 23:11:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.005 23:11:05 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:08.005 23:11:05 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:08.005 23:11:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:08.263 MallocBdevForConfigChangeCheck 00:05:08.263 23:11:05 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:08.263 23:11:05 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:08.263 
23:11:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.263 23:11:05 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:08.263 23:11:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:08.520 23:11:06 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:08.520 INFO: shutting down applications... 00:05:08.520 23:11:06 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:08.520 23:11:06 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:08.520 23:11:06 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:08.520 23:11:06 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:10.419 Calling clear_iscsi_subsystem 00:05:10.419 Calling clear_nvmf_subsystem 00:05:10.419 Calling clear_nbd_subsystem 00:05:10.419 Calling clear_ublk_subsystem 00:05:10.419 Calling clear_vhost_blk_subsystem 00:05:10.419 Calling clear_vhost_scsi_subsystem 00:05:10.419 Calling clear_bdev_subsystem 00:05:10.419 23:11:07 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:10.419 23:11:07 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:10.419 23:11:07 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:10.419 23:11:07 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:10.419 23:11:07 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:10.419 23:11:07 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:10.677 23:11:08 json_config -- json_config/json_config.sh@349 -- # break 00:05:10.677 23:11:08 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:10.677 23:11:08 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:10.677 23:11:08 json_config -- json_config/common.sh@31 -- # local app=target 00:05:10.677 23:11:08 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:10.677 23:11:08 json_config -- json_config/common.sh@35 -- # [[ -n 1251698 ]] 00:05:10.677 23:11:08 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1251698 00:05:10.677 23:11:08 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:10.677 23:11:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.677 23:11:08 json_config -- json_config/common.sh@41 -- # kill -0 1251698 00:05:10.677 23:11:08 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.243 23:11:08 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.243 23:11:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.243 23:11:08 json_config -- json_config/common.sh@41 -- # kill -0 1251698 00:05:11.243 23:11:08 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:11.243 23:11:08 json_config -- json_config/common.sh@43 -- # break 00:05:11.243 23:11:08 json_config -- json_config/common.sh@48 -- # 
[[ -n '' ]] 00:05:11.243 23:11:08 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:11.243 SPDK target shutdown done 00:05:11.243 23:11:08 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:11.243 INFO: relaunching applications... 00:05:11.243 23:11:08 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.243 23:11:08 json_config -- json_config/common.sh@9 -- # local app=target 00:05:11.243 23:11:08 json_config -- json_config/common.sh@10 -- # shift 00:05:11.243 23:11:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:11.243 23:11:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:11.243 23:11:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:11.243 23:11:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.243 23:11:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.243 23:11:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1253011 00:05:11.243 23:11:08 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.243 23:11:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:11.243 Waiting for target to run... 00:05:11.243 23:11:08 json_config -- json_config/common.sh@25 -- # waitforlisten 1253011 /var/tmp/spdk_tgt.sock 00:05:11.243 23:11:08 json_config -- common/autotest_common.sh@831 -- # '[' -z 1253011 ']' 00:05:11.243 23:11:08 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:11.243 23:11:08 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:11.243 23:11:08 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:11.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:11.243 23:11:08 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.243 23:11:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.243 [2024-07-25 23:11:08.738205] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:11.243 [2024-07-25 23:11:08.738301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1253011 ] 00:05:11.243 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.504 [2024-07-25 23:11:09.220988] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
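A note on what the relaunch above is exercising: the first spdk_tgt instance was configured live over RPC, its state was dumped to spdk_tgt_config.json via save_config, and the second instance boots straight from that file with --json. A minimal sketch of the same round-trip, assuming a built spdk_tgt and the repo's rpc.py; the paths and Malloc parameters below are illustrative, not taken from this run:

    # start a target and configure it over the RPC socket
    ./build/bin/spdk_tgt -r /var/tmp/spdk_tgt.sock &
    tgt=$!
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 64 512 -b Malloc0

    # capture the live configuration, then restart from the saved JSON
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
    kill -SIGINT "$tgt"; wait "$tgt"
    ./build/bin/spdk_tgt -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &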
00:05:11.764 [2024-07-25 23:11:09.254749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.764 [2024-07-25 23:11:09.337638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.076 [2024-07-25 23:11:12.370915] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:15.076 [2024-07-25 23:11:12.403399] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:15.640 23:11:13 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.640 23:11:13 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:15.640 23:11:13 json_config -- json_config/common.sh@26 -- # echo '' 00:05:15.640 00:05:15.640 23:11:13 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:15.640 23:11:13 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:15.640 INFO: Checking if target configuration is the same... 00:05:15.640 23:11:13 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.640 23:11:13 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:15.640 23:11:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:15.640 + '[' 2 -ne 2 ']' 00:05:15.640 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:15.640 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:15.640 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:15.640 +++ basename /dev/fd/62 00:05:15.641 ++ mktemp /tmp/62.XXX 00:05:15.641 + tmp_file_1=/tmp/62.Ded 00:05:15.641 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.641 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:15.641 + tmp_file_2=/tmp/spdk_tgt_config.json.zej 00:05:15.641 + ret=0 00:05:15.641 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:15.897 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:15.897 + diff -u /tmp/62.Ded /tmp/spdk_tgt_config.json.zej 00:05:15.897 + echo 'INFO: JSON config files are the same' 00:05:15.897 INFO: JSON config files are the same 00:05:15.897 + rm /tmp/62.Ded /tmp/spdk_tgt_config.json.zej 00:05:15.897 + exit 0 00:05:15.897 23:11:13 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:15.897 23:11:13 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:15.897 INFO: changing configuration and checking if this can be detected... 
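The "JSON config files are the same" verdict above is produced by normalizing both sides before diffing, since save_config gives no ordering guarantees: json_diff.sh routes the live config and the on-disk file through config_filter.py -method sort and compares the results with diff -u. A rough equivalent, assuming jq is available; note jq --sort-keys only sorts object keys, so it is a weaker normalizer than the repo's helper, which also sorts arrays:

    live=$(mktemp); file=$(mktemp)
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | jq --sort-keys . > "$live"
    jq --sort-keys . spdk_tgt_config.json > "$file"
    diff -u "$file" "$live" && echo 'INFO: JSON config files are the same'
    rm -f "$live" "$file"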
00:05:15.897 23:11:13 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:15.897 23:11:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:16.154 23:11:13 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:16.154 23:11:13 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:16.154 23:11:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:16.154 + '[' 2 -ne 2 ']' 00:05:16.154 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:16.154 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:16.154 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:16.154 +++ basename /dev/fd/62 00:05:16.154 ++ mktemp /tmp/62.XXX 00:05:16.154 + tmp_file_1=/tmp/62.eak 00:05:16.154 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:16.154 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:16.154 + tmp_file_2=/tmp/spdk_tgt_config.json.ei9 00:05:16.154 + ret=0 00:05:16.154 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:16.718 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:16.718 + diff -u /tmp/62.eak /tmp/spdk_tgt_config.json.ei9 00:05:16.718 + ret=1 00:05:16.718 + echo '=== Start of file: /tmp/62.eak ===' 00:05:16.718 + cat /tmp/62.eak 00:05:16.718 + echo '=== End of file: /tmp/62.eak ===' 00:05:16.718 + echo '' 00:05:16.718 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ei9 ===' 00:05:16.718 + cat /tmp/spdk_tgt_config.json.ei9 00:05:16.718 + echo '=== End of file: /tmp/spdk_tgt_config.json.ei9 ===' 00:05:16.718 + echo '' 00:05:16.718 + rm /tmp/62.eak /tmp/spdk_tgt_config.json.ei9 00:05:16.718 + exit 1 00:05:16.718 23:11:14 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:16.718 INFO: configuration change detected. 
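The detection pass above inverts the check: MallocBdevForConfigChangeCheck exists only as a sentinel, so deleting it guarantees the live config no longer matches the saved file, and json_diff.sh must now exit non-zero. Sketched with the same jq normalization as before (an approximation of the repo's sort helper):

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    if ! diff -u <(jq --sort-keys . spdk_tgt_config.json) \
                 <(./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | jq --sort-keys .); then
        echo 'INFO: configuration change detected.'
    fi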
00:05:16.718 23:11:14 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:16.718 23:11:14 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:16.718 23:11:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:16.718 23:11:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.718 23:11:14 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:16.718 23:11:14 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:16.718 23:11:14 json_config -- json_config/json_config.sh@321 -- # [[ -n 1253011 ]] 00:05:16.718 23:11:14 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:16.718 23:11:14 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:16.718 23:11:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:16.718 23:11:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.718 23:11:14 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:16.718 23:11:14 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:16.718 23:11:14 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:16.718 23:11:14 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:16.718 23:11:14 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:16.718 23:11:14 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:16.718 23:11:14 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:16.718 23:11:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.718 23:11:14 json_config -- json_config/json_config.sh@327 -- # killprocess 1253011 00:05:16.718 23:11:14 json_config -- common/autotest_common.sh@950 -- # '[' -z 1253011 ']' 00:05:16.718 23:11:14 json_config -- common/autotest_common.sh@954 -- # kill -0 1253011 00:05:16.718 23:11:14 json_config -- common/autotest_common.sh@955 -- # uname 00:05:16.718 23:11:14 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:16.718 23:11:14 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1253011 00:05:16.718 23:11:14 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:16.718 23:11:14 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:16.718 23:11:14 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1253011' 00:05:16.718 killing process with pid 1253011 00:05:16.718 23:11:14 json_config -- common/autotest_common.sh@969 -- # kill 1253011 00:05:16.718 23:11:14 json_config -- common/autotest_common.sh@974 -- # wait 1253011 00:05:18.616 23:11:15 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.616 23:11:15 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:18.616 23:11:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:18.616 23:11:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.616 23:11:15 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:18.616 23:11:15 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:18.616 INFO: Success 00:05:18.616 00:05:18.616 real 0m16.751s 
00:05:18.616 user 0m18.758s 00:05:18.616 sys 0m2.026s 00:05:18.616 23:11:15 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.616 23:11:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.616 ************************************ 00:05:18.616 END TEST json_config 00:05:18.616 ************************************ 00:05:18.616 23:11:15 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:18.616 23:11:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.616 23:11:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.616 23:11:15 -- common/autotest_common.sh@10 -- # set +x 00:05:18.616 ************************************ 00:05:18.616 START TEST json_config_extra_key 00:05:18.616 ************************************ 00:05:18.616 23:11:16 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:18.616 23:11:16 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:18.616 23:11:16 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.616 23:11:16 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.616 23:11:16 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.616 23:11:16 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.616 23:11:16 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.616 23:11:16 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.616 23:11:16 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:18.616 23:11:16 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:18.616 23:11:16 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:18.616 23:11:16 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:18.616 23:11:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:18.616 23:11:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:18.616 23:11:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:18.616 23:11:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:18.616 23:11:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:18.616 23:11:16 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:18.616 23:11:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:18.616 23:11:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:18.616 23:11:16 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:18.616 23:11:16 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:18.616 INFO: launching applications... 00:05:18.616 23:11:16 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:18.616 23:11:16 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:18.616 23:11:16 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:18.616 23:11:16 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:18.616 23:11:16 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:18.616 23:11:16 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:18.616 23:11:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.616 23:11:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.616 23:11:16 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1253940 00:05:18.616 23:11:16 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:18.616 23:11:16 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:18.616 Waiting for target to run... 00:05:18.616 23:11:16 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1253940 /var/tmp/spdk_tgt.sock 00:05:18.616 23:11:16 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1253940 ']' 00:05:18.616 23:11:16 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:18.616 23:11:16 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:18.616 23:11:16 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:18.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:18.616 23:11:16 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:18.616 23:11:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:18.616 [2024-07-25 23:11:16.108002] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
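Unlike json_config, this test never configures the target over RPC: spdk_tgt is booted once, directly from test/json_config/extra_key.json. That file's contents are not echoed into this log; purely as an illustration of the generic SPDK JSON layout it follows (a "subsystems" array whose entries carry method/params pairs), a minimal hypothetical config and the matching launch line would look roughly like:

    # /tmp/minimal.json -- hypothetical file, shape per SPDK's subsystems/config layout
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 } }
          ]
        }
      ]
    }

    # boot the target straight from it
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /tmp/minimal.json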
00:05:18.616 [2024-07-25 23:11:16.108111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1253940 ] 00:05:18.616 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.875 [2024-07-25 23:11:16.418976] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:18.875 [2024-07-25 23:11:16.452457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.875 [2024-07-25 23:11:16.515843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.440 23:11:17 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.440 23:11:17 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:19.440 23:11:17 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:19.440 00:05:19.440 23:11:17 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:19.440 INFO: shutting down applications... 00:05:19.440 23:11:17 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:19.440 23:11:17 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:19.440 23:11:17 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:19.440 23:11:17 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1253940 ]] 00:05:19.440 23:11:17 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1253940 00:05:19.440 23:11:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:19.440 23:11:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.440 23:11:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1253940 00:05:19.440 23:11:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.007 23:11:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.007 23:11:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.007 23:11:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1253940 00:05:20.007 23:11:17 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:20.007 23:11:17 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:20.007 23:11:17 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:20.007 23:11:17 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:20.007 SPDK target shutdown done 00:05:20.007 23:11:17 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:20.007 Success 00:05:20.007 00:05:20.007 real 0m1.520s 00:05:20.007 user 0m1.461s 00:05:20.007 sys 0m0.431s 00:05:20.007 23:11:17 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.007 23:11:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:20.007 ************************************ 00:05:20.007 END TEST json_config_extra_key 00:05:20.007 ************************************ 00:05:20.007 23:11:17 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:20.007 23:11:17 -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:05:20.007 23:11:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.007 23:11:17 -- common/autotest_common.sh@10 -- # set +x 00:05:20.007 ************************************ 00:05:20.007 START TEST alias_rpc 00:05:20.007 ************************************ 00:05:20.007 23:11:17 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:20.007 * Looking for test storage... 00:05:20.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:20.007 23:11:17 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:20.007 23:11:17 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1254174 00:05:20.007 23:11:17 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.007 23:11:17 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1254174 00:05:20.007 23:11:17 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1254174 ']' 00:05:20.007 23:11:17 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.007 23:11:17 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.007 23:11:17 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.007 23:11:17 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.007 23:11:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.007 [2024-07-25 23:11:17.685707] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:20.007 [2024-07-25 23:11:17.685802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1254174 ] 00:05:20.007 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.007 [2024-07-25 23:11:17.717920] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
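Every test in this stretch runs the same lifecycle that the waitforlisten/killprocess helpers implement: start spdk_tgt in the background, poll until its RPC socket answers, run the test body, then signal and reap. A stripped-down version of the startup half, using the built-in spdk_get_version RPC as the liveness probe (the retry count and sleep interval here are arbitrary, not the helpers' exact values):

    ./build/bin/spdk_tgt -r /var/tmp/spdk.sock &
    tgt_pid=$!

    # poll the UNIX-domain RPC socket until the target responds
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
        sleep 0.1
    done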
00:05:20.265 [2024-07-25 23:11:17.745785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.265 [2024-07-25 23:11:17.829932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.524 23:11:18 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.524 23:11:18 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:20.524 23:11:18 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:20.782 23:11:18 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1254174 00:05:20.782 23:11:18 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1254174 ']' 00:05:20.782 23:11:18 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1254174 00:05:20.782 23:11:18 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:20.782 23:11:18 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:20.782 23:11:18 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1254174 00:05:20.782 23:11:18 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:20.782 23:11:18 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:20.782 23:11:18 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1254174' 00:05:20.782 killing process with pid 1254174 00:05:20.782 23:11:18 alias_rpc -- common/autotest_common.sh@969 -- # kill 1254174 00:05:20.782 23:11:18 alias_rpc -- common/autotest_common.sh@974 -- # wait 1254174 00:05:21.348 00:05:21.348 real 0m1.221s 00:05:21.348 user 0m1.308s 00:05:21.348 sys 0m0.423s 00:05:21.348 23:11:18 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.348 23:11:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.348 ************************************ 00:05:21.348 END TEST alias_rpc 00:05:21.348 ************************************ 00:05:21.348 23:11:18 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:21.348 23:11:18 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:21.348 23:11:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.348 23:11:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.348 23:11:18 -- common/autotest_common.sh@10 -- # set +x 00:05:21.348 ************************************ 00:05:21.348 START TEST spdkcli_tcp 00:05:21.348 ************************************ 00:05:21.348 23:11:18 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:21.348 * Looking for test storage... 
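The killprocess teardown in the alias_rpc run above is deliberately defensive: kill -0 first checks the PID is still alive, ps -o comm= then confirms the process name (reactor_0 here) is not sudo before any signal is sent, and wait reaps the child afterwards. Roughly:

    pid=1254174    # PID taken from the log; purely illustrative
    if kill -0 "$pid" 2>/dev/null; then
        name=$(ps --no-headers -o comm= "$pid")
        if [ "$name" != "sudo" ]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid" 2>/dev/null
        fi
    fi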
00:05:21.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:21.348 23:11:18 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:21.348 23:11:18 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:21.348 23:11:18 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:21.348 23:11:18 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:21.348 23:11:18 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:21.348 23:11:18 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:21.348 23:11:18 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:21.348 23:11:18 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:21.348 23:11:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.348 23:11:18 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1254430 00:05:21.348 23:11:18 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:21.348 23:11:18 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1254430 00:05:21.348 23:11:18 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1254430 ']' 00:05:21.348 23:11:18 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.348 23:11:18 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:21.348 23:11:18 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.348 23:11:18 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.348 23:11:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.348 [2024-07-25 23:11:18.947762] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:21.348 [2024-07-25 23:11:18.947841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1254430 ] 00:05:21.348 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.348 [2024-07-25 23:11:18.980860] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
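The twist spdkcli_tcp adds, visible just below, is the transport: rpc.py normally speaks over a UNIX-domain socket, so the test stands up socat to listen on 127.0.0.1:9998 and forward to /var/tmp/spdk.sock, then points rpc.py at the TCP address. A minimal reproduction, with the flags mirroring the log (-r is the connection retry count, -t the timeout in seconds):

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # drive the target through the TCP bridge instead of the socket file
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"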
00:05:21.348 [2024-07-25 23:11:19.009010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.607 [2024-07-25 23:11:19.099585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.607 [2024-07-25 23:11:19.099590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.864 23:11:19 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:21.864 23:11:19 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:21.864 23:11:19 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1254441 00:05:21.864 23:11:19 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:21.864 23:11:19 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:22.122 [ 00:05:22.122 "bdev_malloc_delete", 00:05:22.122 "bdev_malloc_create", 00:05:22.122 "bdev_null_resize", 00:05:22.122 "bdev_null_delete", 00:05:22.122 "bdev_null_create", 00:05:22.122 "bdev_nvme_cuse_unregister", 00:05:22.122 "bdev_nvme_cuse_register", 00:05:22.122 "bdev_opal_new_user", 00:05:22.122 "bdev_opal_set_lock_state", 00:05:22.122 "bdev_opal_delete", 00:05:22.122 "bdev_opal_get_info", 00:05:22.122 "bdev_opal_create", 00:05:22.122 "bdev_nvme_opal_revert", 00:05:22.122 "bdev_nvme_opal_init", 00:05:22.122 "bdev_nvme_send_cmd", 00:05:22.122 "bdev_nvme_get_path_iostat", 00:05:22.122 "bdev_nvme_get_mdns_discovery_info", 00:05:22.122 "bdev_nvme_stop_mdns_discovery", 00:05:22.122 "bdev_nvme_start_mdns_discovery", 00:05:22.122 "bdev_nvme_set_multipath_policy", 00:05:22.122 "bdev_nvme_set_preferred_path", 00:05:22.122 "bdev_nvme_get_io_paths", 00:05:22.122 "bdev_nvme_remove_error_injection", 00:05:22.122 "bdev_nvme_add_error_injection", 00:05:22.122 "bdev_nvme_get_discovery_info", 00:05:22.122 "bdev_nvme_stop_discovery", 00:05:22.122 "bdev_nvme_start_discovery", 00:05:22.122 "bdev_nvme_get_controller_health_info", 00:05:22.122 "bdev_nvme_disable_controller", 00:05:22.122 "bdev_nvme_enable_controller", 00:05:22.122 "bdev_nvme_reset_controller", 00:05:22.122 "bdev_nvme_get_transport_statistics", 00:05:22.122 "bdev_nvme_apply_firmware", 00:05:22.122 "bdev_nvme_detach_controller", 00:05:22.122 "bdev_nvme_get_controllers", 00:05:22.122 "bdev_nvme_attach_controller", 00:05:22.122 "bdev_nvme_set_hotplug", 00:05:22.122 "bdev_nvme_set_options", 00:05:22.122 "bdev_passthru_delete", 00:05:22.122 "bdev_passthru_create", 00:05:22.122 "bdev_lvol_set_parent_bdev", 00:05:22.122 "bdev_lvol_set_parent", 00:05:22.122 "bdev_lvol_check_shallow_copy", 00:05:22.122 "bdev_lvol_start_shallow_copy", 00:05:22.122 "bdev_lvol_grow_lvstore", 00:05:22.122 "bdev_lvol_get_lvols", 00:05:22.122 "bdev_lvol_get_lvstores", 00:05:22.122 "bdev_lvol_delete", 00:05:22.123 "bdev_lvol_set_read_only", 00:05:22.123 "bdev_lvol_resize", 00:05:22.123 "bdev_lvol_decouple_parent", 00:05:22.123 "bdev_lvol_inflate", 00:05:22.123 "bdev_lvol_rename", 00:05:22.123 "bdev_lvol_clone_bdev", 00:05:22.123 "bdev_lvol_clone", 00:05:22.123 "bdev_lvol_snapshot", 00:05:22.123 "bdev_lvol_create", 00:05:22.123 "bdev_lvol_delete_lvstore", 00:05:22.123 "bdev_lvol_rename_lvstore", 00:05:22.123 "bdev_lvol_create_lvstore", 00:05:22.123 "bdev_raid_set_options", 00:05:22.123 "bdev_raid_remove_base_bdev", 00:05:22.123 "bdev_raid_add_base_bdev", 00:05:22.123 "bdev_raid_delete", 00:05:22.123 "bdev_raid_create", 00:05:22.123 "bdev_raid_get_bdevs", 00:05:22.123 "bdev_error_inject_error", 00:05:22.123 "bdev_error_delete", 
00:05:22.123 "bdev_error_create", 00:05:22.123 "bdev_split_delete", 00:05:22.123 "bdev_split_create", 00:05:22.123 "bdev_delay_delete", 00:05:22.123 "bdev_delay_create", 00:05:22.123 "bdev_delay_update_latency", 00:05:22.123 "bdev_zone_block_delete", 00:05:22.123 "bdev_zone_block_create", 00:05:22.123 "blobfs_create", 00:05:22.123 "blobfs_detect", 00:05:22.123 "blobfs_set_cache_size", 00:05:22.123 "bdev_aio_delete", 00:05:22.123 "bdev_aio_rescan", 00:05:22.123 "bdev_aio_create", 00:05:22.123 "bdev_ftl_set_property", 00:05:22.123 "bdev_ftl_get_properties", 00:05:22.123 "bdev_ftl_get_stats", 00:05:22.123 "bdev_ftl_unmap", 00:05:22.123 "bdev_ftl_unload", 00:05:22.123 "bdev_ftl_delete", 00:05:22.123 "bdev_ftl_load", 00:05:22.123 "bdev_ftl_create", 00:05:22.123 "bdev_virtio_attach_controller", 00:05:22.123 "bdev_virtio_scsi_get_devices", 00:05:22.123 "bdev_virtio_detach_controller", 00:05:22.123 "bdev_virtio_blk_set_hotplug", 00:05:22.123 "bdev_iscsi_delete", 00:05:22.123 "bdev_iscsi_create", 00:05:22.123 "bdev_iscsi_set_options", 00:05:22.123 "accel_error_inject_error", 00:05:22.123 "ioat_scan_accel_module", 00:05:22.123 "dsa_scan_accel_module", 00:05:22.123 "iaa_scan_accel_module", 00:05:22.123 "vfu_virtio_create_scsi_endpoint", 00:05:22.123 "vfu_virtio_scsi_remove_target", 00:05:22.123 "vfu_virtio_scsi_add_target", 00:05:22.123 "vfu_virtio_create_blk_endpoint", 00:05:22.123 "vfu_virtio_delete_endpoint", 00:05:22.123 "keyring_file_remove_key", 00:05:22.123 "keyring_file_add_key", 00:05:22.123 "keyring_linux_set_options", 00:05:22.123 "iscsi_get_histogram", 00:05:22.123 "iscsi_enable_histogram", 00:05:22.123 "iscsi_set_options", 00:05:22.123 "iscsi_get_auth_groups", 00:05:22.123 "iscsi_auth_group_remove_secret", 00:05:22.123 "iscsi_auth_group_add_secret", 00:05:22.123 "iscsi_delete_auth_group", 00:05:22.123 "iscsi_create_auth_group", 00:05:22.123 "iscsi_set_discovery_auth", 00:05:22.123 "iscsi_get_options", 00:05:22.123 "iscsi_target_node_request_logout", 00:05:22.123 "iscsi_target_node_set_redirect", 00:05:22.123 "iscsi_target_node_set_auth", 00:05:22.123 "iscsi_target_node_add_lun", 00:05:22.123 "iscsi_get_stats", 00:05:22.123 "iscsi_get_connections", 00:05:22.123 "iscsi_portal_group_set_auth", 00:05:22.123 "iscsi_start_portal_group", 00:05:22.123 "iscsi_delete_portal_group", 00:05:22.123 "iscsi_create_portal_group", 00:05:22.123 "iscsi_get_portal_groups", 00:05:22.123 "iscsi_delete_target_node", 00:05:22.123 "iscsi_target_node_remove_pg_ig_maps", 00:05:22.123 "iscsi_target_node_add_pg_ig_maps", 00:05:22.123 "iscsi_create_target_node", 00:05:22.123 "iscsi_get_target_nodes", 00:05:22.123 "iscsi_delete_initiator_group", 00:05:22.123 "iscsi_initiator_group_remove_initiators", 00:05:22.123 "iscsi_initiator_group_add_initiators", 00:05:22.123 "iscsi_create_initiator_group", 00:05:22.123 "iscsi_get_initiator_groups", 00:05:22.123 "nvmf_set_crdt", 00:05:22.123 "nvmf_set_config", 00:05:22.123 "nvmf_set_max_subsystems", 00:05:22.123 "nvmf_stop_mdns_prr", 00:05:22.123 "nvmf_publish_mdns_prr", 00:05:22.123 "nvmf_subsystem_get_listeners", 00:05:22.123 "nvmf_subsystem_get_qpairs", 00:05:22.123 "nvmf_subsystem_get_controllers", 00:05:22.123 "nvmf_get_stats", 00:05:22.123 "nvmf_get_transports", 00:05:22.123 "nvmf_create_transport", 00:05:22.123 "nvmf_get_targets", 00:05:22.123 "nvmf_delete_target", 00:05:22.123 "nvmf_create_target", 00:05:22.123 "nvmf_subsystem_allow_any_host", 00:05:22.123 "nvmf_subsystem_remove_host", 00:05:22.123 "nvmf_subsystem_add_host", 00:05:22.123 "nvmf_ns_remove_host", 
00:05:22.123 "nvmf_ns_add_host", 00:05:22.123 "nvmf_subsystem_remove_ns", 00:05:22.123 "nvmf_subsystem_add_ns", 00:05:22.123 "nvmf_subsystem_listener_set_ana_state", 00:05:22.123 "nvmf_discovery_get_referrals", 00:05:22.123 "nvmf_discovery_remove_referral", 00:05:22.123 "nvmf_discovery_add_referral", 00:05:22.123 "nvmf_subsystem_remove_listener", 00:05:22.123 "nvmf_subsystem_add_listener", 00:05:22.123 "nvmf_delete_subsystem", 00:05:22.123 "nvmf_create_subsystem", 00:05:22.123 "nvmf_get_subsystems", 00:05:22.123 "env_dpdk_get_mem_stats", 00:05:22.123 "nbd_get_disks", 00:05:22.123 "nbd_stop_disk", 00:05:22.123 "nbd_start_disk", 00:05:22.123 "ublk_recover_disk", 00:05:22.123 "ublk_get_disks", 00:05:22.123 "ublk_stop_disk", 00:05:22.123 "ublk_start_disk", 00:05:22.123 "ublk_destroy_target", 00:05:22.123 "ublk_create_target", 00:05:22.123 "virtio_blk_create_transport", 00:05:22.123 "virtio_blk_get_transports", 00:05:22.123 "vhost_controller_set_coalescing", 00:05:22.123 "vhost_get_controllers", 00:05:22.123 "vhost_delete_controller", 00:05:22.123 "vhost_create_blk_controller", 00:05:22.123 "vhost_scsi_controller_remove_target", 00:05:22.123 "vhost_scsi_controller_add_target", 00:05:22.123 "vhost_start_scsi_controller", 00:05:22.123 "vhost_create_scsi_controller", 00:05:22.123 "thread_set_cpumask", 00:05:22.123 "framework_get_governor", 00:05:22.123 "framework_get_scheduler", 00:05:22.123 "framework_set_scheduler", 00:05:22.123 "framework_get_reactors", 00:05:22.123 "thread_get_io_channels", 00:05:22.123 "thread_get_pollers", 00:05:22.123 "thread_get_stats", 00:05:22.123 "framework_monitor_context_switch", 00:05:22.123 "spdk_kill_instance", 00:05:22.123 "log_enable_timestamps", 00:05:22.123 "log_get_flags", 00:05:22.123 "log_clear_flag", 00:05:22.123 "log_set_flag", 00:05:22.123 "log_get_level", 00:05:22.123 "log_set_level", 00:05:22.123 "log_get_print_level", 00:05:22.123 "log_set_print_level", 00:05:22.123 "framework_enable_cpumask_locks", 00:05:22.123 "framework_disable_cpumask_locks", 00:05:22.123 "framework_wait_init", 00:05:22.123 "framework_start_init", 00:05:22.123 "scsi_get_devices", 00:05:22.123 "bdev_get_histogram", 00:05:22.123 "bdev_enable_histogram", 00:05:22.123 "bdev_set_qos_limit", 00:05:22.123 "bdev_set_qd_sampling_period", 00:05:22.123 "bdev_get_bdevs", 00:05:22.123 "bdev_reset_iostat", 00:05:22.123 "bdev_get_iostat", 00:05:22.123 "bdev_examine", 00:05:22.123 "bdev_wait_for_examine", 00:05:22.123 "bdev_set_options", 00:05:22.123 "notify_get_notifications", 00:05:22.123 "notify_get_types", 00:05:22.123 "accel_get_stats", 00:05:22.123 "accel_set_options", 00:05:22.123 "accel_set_driver", 00:05:22.123 "accel_crypto_key_destroy", 00:05:22.123 "accel_crypto_keys_get", 00:05:22.123 "accel_crypto_key_create", 00:05:22.123 "accel_assign_opc", 00:05:22.123 "accel_get_module_info", 00:05:22.123 "accel_get_opc_assignments", 00:05:22.123 "vmd_rescan", 00:05:22.123 "vmd_remove_device", 00:05:22.123 "vmd_enable", 00:05:22.123 "sock_get_default_impl", 00:05:22.123 "sock_set_default_impl", 00:05:22.123 "sock_impl_set_options", 00:05:22.123 "sock_impl_get_options", 00:05:22.123 "iobuf_get_stats", 00:05:22.123 "iobuf_set_options", 00:05:22.123 "keyring_get_keys", 00:05:22.123 "framework_get_pci_devices", 00:05:22.123 "framework_get_config", 00:05:22.123 "framework_get_subsystems", 00:05:22.123 "vfu_tgt_set_base_path", 00:05:22.123 "trace_get_info", 00:05:22.123 "trace_get_tpoint_group_mask", 00:05:22.123 "trace_disable_tpoint_group", 00:05:22.123 "trace_enable_tpoint_group", 00:05:22.123 
"trace_clear_tpoint_mask", 00:05:22.123 "trace_set_tpoint_mask", 00:05:22.123 "spdk_get_version", 00:05:22.123 "rpc_get_methods" 00:05:22.123 ] 00:05:22.123 23:11:19 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:22.123 23:11:19 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:22.123 23:11:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.123 23:11:19 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:22.123 23:11:19 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1254430 00:05:22.123 23:11:19 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1254430 ']' 00:05:22.123 23:11:19 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1254430 00:05:22.123 23:11:19 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:22.123 23:11:19 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:22.123 23:11:19 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1254430 00:05:22.124 23:11:19 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:22.124 23:11:19 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:22.124 23:11:19 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1254430' 00:05:22.124 killing process with pid 1254430 00:05:22.124 23:11:19 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1254430 00:05:22.124 23:11:19 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1254430 00:05:22.382 00:05:22.382 real 0m1.208s 00:05:22.382 user 0m2.178s 00:05:22.382 sys 0m0.441s 00:05:22.382 23:11:20 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.382 23:11:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.382 ************************************ 00:05:22.382 END TEST spdkcli_tcp 00:05:22.382 ************************************ 00:05:22.382 23:11:20 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:22.382 23:11:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.382 23:11:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.382 23:11:20 -- common/autotest_common.sh@10 -- # set +x 00:05:22.382 ************************************ 00:05:22.382 START TEST dpdk_mem_utility 00:05:22.382 ************************************ 00:05:22.382 23:11:20 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:22.640 * Looking for test storage... 
00:05:22.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:22.640 23:11:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:22.640 23:11:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1254634 00:05:22.640 23:11:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.640 23:11:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1254634 00:05:22.640 23:11:20 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1254634 ']' 00:05:22.640 23:11:20 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.640 23:11:20 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.640 23:11:20 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.640 23:11:20 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.640 23:11:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:22.640 [2024-07-25 23:11:20.200967] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:22.640 [2024-07-25 23:11:20.201071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1254634 ] 00:05:22.640 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.640 [2024-07-25 23:11:20.232206] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
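The memory-utility test below drives two tools: the env_dpdk_get_mem_stats RPC, which makes the target write its DPDK memory state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which summarizes that dump. A sketch of the sequence as it appears in this trace ($SPDK as before; reading -m 0 as "detail for heap id 0" is inferred from the dump that follows):

  $SPDK/scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
  $SPDK/scripts/dpdk_mem_info.py                # heap/mempool/memzone summary
  $SPDK/scripts/dpdk_mem_info.py -m 0           # per-heap detail (heap id 0 below)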
00:05:22.640 [2024-07-25 23:11:20.263787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.640 [2024-07-25 23:11:20.355585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.899 23:11:20 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.899 23:11:20 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:22.899 23:11:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:22.899 23:11:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:22.899 23:11:20 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.899 23:11:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:22.899 { 00:05:22.899 "filename": "/tmp/spdk_mem_dump.txt" 00:05:22.899 } 00:05:22.899 23:11:20 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.899 23:11:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:23.158 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:23.158 1 heaps totaling size 814.000000 MiB 00:05:23.158 size: 814.000000 MiB heap id: 0 00:05:23.158 end heaps---------- 00:05:23.158 8 mempools totaling size 598.116089 MiB 00:05:23.158 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:23.158 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:23.158 size: 84.521057 MiB name: bdev_io_1254634 00:05:23.158 size: 51.011292 MiB name: evtpool_1254634 00:05:23.158 size: 50.003479 MiB name: msgpool_1254634 00:05:23.158 size: 21.763794 MiB name: PDU_Pool 00:05:23.158 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:23.158 size: 0.026123 MiB name: Session_Pool 00:05:23.158 end mempools------- 00:05:23.158 6 memzones totaling size 4.142822 MiB 00:05:23.158 size: 1.000366 MiB name: RG_ring_0_1254634 00:05:23.158 size: 1.000366 MiB name: RG_ring_1_1254634 00:05:23.158 size: 1.000366 MiB name: RG_ring_4_1254634 00:05:23.158 size: 1.000366 MiB name: RG_ring_5_1254634 00:05:23.158 size: 0.125366 MiB name: RG_ring_2_1254634 00:05:23.158 size: 0.015991 MiB name: RG_ring_3_1254634 00:05:23.158 end memzones------- 00:05:23.158 23:11:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:23.158 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:23.158 list of free elements. 
size: 12.519348 MiB 00:05:23.158 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:23.158 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:23.158 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:23.158 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:23.158 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:23.158 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:23.158 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:23.158 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:23.158 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:23.158 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:23.158 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:23.158 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:23.158 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:23.158 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:23.158 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:23.158 list of standard malloc elements. size: 199.218079 MiB 00:05:23.158 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:23.158 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:23.158 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:23.158 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:23.158 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:23.158 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:23.158 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:23.158 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:23.158 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:23.158 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:23.158 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:23.158 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:23.158 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:23.158 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:23.158 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:23.158 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:23.158 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:23.158 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:23.158 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:23.158 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:23.158 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:23.158 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:23.158 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:23.158 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:23.158 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:23.158 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:23.158 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:23.158 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:23.158 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:23.158 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:23.158 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:23.158 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:23.158 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:23.158 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:23.158 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:23.158 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:23.158 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:23.158 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:23.158 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:23.158 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:23.158 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:23.158 list of memzone associated elements. size: 602.262573 MiB 00:05:23.158 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:23.158 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:23.158 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:23.158 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:23.158 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:23.158 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1254634_0 00:05:23.158 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:23.158 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1254634_0 00:05:23.158 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:23.158 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1254634_0 00:05:23.158 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:23.158 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:23.158 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:23.158 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:23.158 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:23.158 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1254634 00:05:23.158 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:23.158 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1254634 00:05:23.158 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:23.158 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1254634 00:05:23.158 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:23.158 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:23.158 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:23.158 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:23.158 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:23.158 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:23.158 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:23.158 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:23.158 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:23.158 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1254634 00:05:23.159 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:23.159 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1254634 00:05:23.159 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:23.159 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1254634 00:05:23.159 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:23.159 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1254634 00:05:23.159 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:23.159 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1254634 00:05:23.159 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:23.159 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:23.159 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:23.159 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:23.159 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:23.159 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:23.159 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:23.159 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1254634 00:05:23.159 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:23.159 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:23.159 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:23.159 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:23.159 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:23.159 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1254634 00:05:23.159 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:23.159 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:23.159 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:23.159 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1254634 00:05:23.159 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:23.159 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1254634 00:05:23.159 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:23.159 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:23.159 23:11:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:23.159 23:11:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1254634 00:05:23.159 23:11:20 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1254634 ']' 00:05:23.159 23:11:20 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1254634 00:05:23.159 23:11:20 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:23.159 23:11:20 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:23.159 23:11:20 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1254634 00:05:23.159 23:11:20 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:23.159 23:11:20 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:23.159 23:11:20 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1254634' 00:05:23.159 killing process with pid 1254634 00:05:23.159 23:11:20 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1254634 00:05:23.159 23:11:20 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1254634 00:05:23.725 00:05:23.725 real 0m1.058s 00:05:23.725 user 0m1.048s 00:05:23.725 sys 0m0.424s 00:05:23.725 23:11:21 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.725 23:11:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:23.725 ************************************ 00:05:23.725 END TEST dpdk_mem_utility 00:05:23.725 ************************************ 00:05:23.725 23:11:21 -- spdk/autotest.sh@181 -- # run_test event 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:23.725 23:11:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.725 23:11:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.725 23:11:21 -- common/autotest_common.sh@10 -- # set +x 00:05:23.725 ************************************ 00:05:23.725 START TEST event 00:05:23.725 ************************************ 00:05:23.725 23:11:21 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:23.725 * Looking for test storage... 00:05:23.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:23.725 23:11:21 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:23.725 23:11:21 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:23.725 23:11:21 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:23.725 23:11:21 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:23.725 23:11:21 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.725 23:11:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.725 ************************************ 00:05:23.725 START TEST event_perf 00:05:23.725 ************************************ 00:05:23.725 23:11:21 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:23.725 Running I/O for 1 seconds...[2024-07-25 23:11:21.291724] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:23.725 [2024-07-25 23:11:21.291788] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1254822 ] 00:05:23.725 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.725 [2024-07-25 23:11:21.323430] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:23.725 [2024-07-25 23:11:21.355039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:23.725 [2024-07-25 23:11:21.447370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.725 [2024-07-25 23:11:21.447427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.725 [2024-07-25 23:11:21.447492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:23.725 [2024-07-25 23:11:21.447495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.098 Running I/O for 1 seconds... 00:05:25.098 lcore 0: 233818 00:05:25.098 lcore 1: 233816 00:05:25.098 lcore 2: 233817 00:05:25.098 lcore 3: 233818 00:05:25.098 done. 
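As a reading aid: event_perf starts one reactor per bit in the -m mask and runs for -t seconds, with each lcore printing how many events it processed, so the four counters above (~233k each) are events per core over the one-second run. Reproducing it from this trace:

  $SPDK/test/event/event_perf/event_perf -m 0xF -t 1   # 4 reactors, 1-second run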
00:05:25.098 00:05:25.098 real 0m1.252s 00:05:25.098 user 0m4.161s 00:05:25.098 sys 0m0.085s 00:05:25.098 23:11:22 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.098 23:11:22 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:25.098 ************************************ 00:05:25.098 END TEST event_perf 00:05:25.098 ************************************ 00:05:25.098 23:11:22 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:25.098 23:11:22 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:25.098 23:11:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.098 23:11:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.098 ************************************ 00:05:25.098 START TEST event_reactor 00:05:25.098 ************************************ 00:05:25.098 23:11:22 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:25.098 [2024-07-25 23:11:22.596398] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:25.098 [2024-07-25 23:11:22.596460] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1254980 ] 00:05:25.098 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.098 [2024-07-25 23:11:22.628436] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:25.098 [2024-07-25 23:11:22.660869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.098 [2024-07-25 23:11:22.751929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.471 test_start 00:05:26.471 oneshot 00:05:26.471 tick 100 00:05:26.471 tick 100 00:05:26.471 tick 250 00:05:26.471 tick 100 00:05:26.471 tick 100 00:05:26.471 tick 100 00:05:26.471 tick 250 00:05:26.471 tick 500 00:05:26.471 tick 100 00:05:26.471 tick 100 00:05:26.471 tick 250 00:05:26.471 tick 100 00:05:26.471 tick 100 00:05:26.471 test_end 00:05:26.471 00:05:26.471 real 0m1.252s 00:05:26.471 user 0m1.164s 00:05:26.471 sys 0m0.083s 00:05:26.471 23:11:23 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.471 23:11:23 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:26.471 ************************************ 00:05:26.471 END TEST event_reactor 00:05:26.471 ************************************ 00:05:26.471 23:11:23 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:26.471 23:11:23 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:26.471 23:11:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.471 23:11:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.471 ************************************ 00:05:26.471 START TEST event_reactor_perf 00:05:26.471 ************************************ 00:05:26.471 23:11:23 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:26.471 [2024-07-25 23:11:23.899025] Starting SPDK v24.09-pre git sha1 
704257090 / DPDK 24.07.0-rc3 initialization... 00:05:26.471 [2024-07-25 23:11:23.899118] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255138 ] 00:05:26.471 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.471 [2024-07-25 23:11:23.932036] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:26.471 [2024-07-25 23:11:23.963924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.471 [2024-07-25 23:11:24.051194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.405 test_start 00:05:27.405 test_end 00:05:27.405 Performance: 355740 events per second 00:05:27.405 00:05:27.405 real 0m1.244s 00:05:27.405 user 0m1.165s 00:05:27.405 sys 0m0.074s 00:05:27.405 23:11:25 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.405 23:11:25 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:27.405 ************************************ 00:05:27.405 END TEST event_reactor_perf 00:05:27.405 ************************************ 00:05:27.664 23:11:25 event -- event/event.sh@49 -- # uname -s 00:05:27.664 23:11:25 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:27.664 23:11:25 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:27.664 23:11:25 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.664 23:11:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.664 23:11:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.664 ************************************ 00:05:27.664 START TEST event_scheduler 00:05:27.664 ************************************ 00:05:27.664 23:11:25 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:27.664 * Looking for test storage... 00:05:27.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:27.664 23:11:25 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:27.664 23:11:25 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1255323 00:05:27.664 23:11:25 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:27.664 23:11:25 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.664 23:11:25 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1255323 00:05:27.664 23:11:25 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1255323 ']' 00:05:27.664 23:11:25 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.664 23:11:25 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.664 23:11:25 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:27.664 23:11:25 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.664 23:11:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.664 [2024-07-25 23:11:25.274546] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:27.664 [2024-07-25 23:11:25.274629] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255323 ] 00:05:27.664 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.664 [2024-07-25 23:11:25.306091] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:27.664 [2024-07-25 23:11:25.333155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:27.922 [2024-07-25 23:11:25.420958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.922 [2024-07-25 23:11:25.421018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.922 [2024-07-25 23:11:25.421085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:27.922 [2024-07-25 23:11:25.421088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.922 23:11:25 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.922 23:11:25 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:27.922 23:11:25 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:27.922 23:11:25 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.922 23:11:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.922 [2024-07-25 23:11:25.481876] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:27.922 [2024-07-25 23:11:25.481902] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:27.922 [2024-07-25 23:11:25.481919] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:27.922 [2024-07-25 23:11:25.481929] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:27.922 [2024-07-25 23:11:25.481939] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:27.922 23:11:25 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.922 23:11:25 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:27.922 23:11:25 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.922 23:11:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.922 [2024-07-25 23:11:25.580996] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
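Because the scheduler app is launched with --wait-for-rpc, the trace above can swap in the dynamic scheduler before subsystem init; the dpdk_governor error is tolerated here, with the dynamic scheduler continuing without a governor at the limits printed above. The equivalent manual sequence, using the binary and flags from this trace:

  $SPDK/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  $SPDK/scripts/rpc.py framework_set_scheduler dynamic   # NOTICEs above show load limit 20, core limit 80, core busy 95
  $SPDK/scripts/rpc.py framework_start_init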
00:05:27.922 23:11:25 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.922 23:11:25 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:27.922 23:11:25 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.922 23:11:25 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.922 23:11:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.922 ************************************ 00:05:27.922 START TEST scheduler_create_thread 00:05:27.922 ************************************ 00:05:27.922 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:27.922 23:11:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:27.922 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.922 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.922 2 00:05:27.922 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.922 23:11:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:27.922 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.922 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.922 3 00:05:27.923 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.923 23:11:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:27.923 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.923 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.923 4 00:05:27.923 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.923 23:11:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:27.923 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.923 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.923 5 00:05:27.923 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.923 23:11:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:27.923 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.923 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.181 6 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.181 7 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.181 8 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.181 9 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.181 10 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.181 23:11:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.746 23:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.746 00:05:28.746 real 0m0.589s 00:05:28.746 user 0m0.009s 00:05:28.746 sys 0m0.004s 00:05:28.746 23:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.746 23:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.746 ************************************ 00:05:28.746 END TEST scheduler_create_thread 00:05:28.746 ************************************ 00:05:28.746 23:11:26 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:28.746 23:11:26 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1255323 00:05:28.746 23:11:26 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1255323 ']' 00:05:28.746 23:11:26 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1255323 00:05:28.746 23:11:26 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:28.746 23:11:26 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:28.746 23:11:26 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1255323 00:05:28.746 23:11:26 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:28.746 23:11:26 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:28.746 23:11:26 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1255323' 00:05:28.746 killing process with pid 1255323 00:05:28.746 23:11:26 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1255323 00:05:28.746 23:11:26 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1255323 00:05:29.004 [2024-07-25 23:11:26.677047] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
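The thread lifecycle just exercised comes from the test's scheduler RPC plugin; rpc_cmd in this trace wraps scripts/rpc.py with --plugin scheduler_plugin. Restated as direct calls (assuming the plugin directory is on PYTHONPATH, as the harness arranges), with -n naming the thread, -m pinning a cpumask, and -a giving the busy percentage, as the thread names above suggest:

  $SPDK/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  $SPDK/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50   # thread id 11 -> 50% busy
  $SPDK/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12          # remove thread id 12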
00:05:29.262 00:05:29.262 real 0m1.698s 00:05:29.262 user 0m2.162s 00:05:29.262 sys 0m0.329s 00:05:29.262 23:11:26 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.262 23:11:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:29.262 ************************************ 00:05:29.262 END TEST event_scheduler 00:05:29.262 ************************************ 00:05:29.262 23:11:26 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:29.262 23:11:26 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:29.262 23:11:26 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.262 23:11:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.262 23:11:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.262 ************************************ 00:05:29.262 START TEST app_repeat 00:05:29.262 ************************************ 00:05:29.262 23:11:26 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:29.262 23:11:26 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.262 23:11:26 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.262 23:11:26 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:29.262 23:11:26 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.262 23:11:26 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:29.262 23:11:26 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:29.262 23:11:26 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:29.262 23:11:26 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1255632 00:05:29.262 23:11:26 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:29.262 23:11:26 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.262 23:11:26 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1255632' 00:05:29.262 Process app_repeat pid: 1255632 00:05:29.262 23:11:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:29.262 23:11:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:29.262 spdk_app_start Round 0 00:05:29.262 23:11:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1255632 /var/tmp/spdk-nbd.sock 00:05:29.263 23:11:26 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1255632 ']' 00:05:29.263 23:11:26 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:29.263 23:11:26 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:29.263 23:11:26 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:29.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:29.263 23:11:26 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:29.263 23:11:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:29.263 [2024-07-25 23:11:26.957135] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
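Each app_repeat round that follows has the same shape: the app serves RPC on /var/tmp/spdk-nbd.sock, two 64 MiB malloc bdevs with 4096-byte blocks are created, and both are exported as NBD devices before being verified and torn down. A sketch of the setup half, with $R as shorthand for the rpc.py invocation used below:

  R="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $R bdev_malloc_create 64 4096        # -> Malloc0
  $R bdev_malloc_create 64 4096        # -> Malloc1
  $R nbd_start_disk Malloc0 /dev/nbd0
  $R nbd_start_disk Malloc1 /dev/nbd1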
00:05:29.263 [2024-07-25 23:11:26.957197] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255632 ] 00:05:29.263 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.520 [2024-07-25 23:11:26.991053] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:29.521 [2024-07-25 23:11:27.021724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.521 [2024-07-25 23:11:27.120085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.521 [2024-07-25 23:11:27.120098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.521 23:11:27 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.521 23:11:27 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:29.521 23:11:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.779 Malloc0 00:05:29.779 23:11:27 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.037 Malloc1 00:05:30.294 23:11:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.294 23:11:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.294 23:11:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.294 23:11:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:30.294 23:11:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.294 23:11:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:30.294 23:11:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.294 23:11:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.294 23:11:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.294 23:11:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:30.294 23:11:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.294 23:11:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:30.294 23:11:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:30.294 23:11:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:30.294 23:11:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.294 23:11:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:30.552 /dev/nbd0 00:05:30.552 23:11:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.552 23:11:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:30.552 23:11:28 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:30.552 23:11:28 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:30.552 23:11:28 
event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:30.552 23:11:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:30.552 23:11:28 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:30.552 23:11:28 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:30.552 23:11:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:30.552 23:11:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:30.552 23:11:28 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.552 1+0 records in 00:05:30.552 1+0 records out 00:05:30.552 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170441 s, 24.0 MB/s 00:05:30.552 23:11:28 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.552 23:11:28 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:30.552 23:11:28 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.552 23:11:28 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:30.552 23:11:28 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:30.552 23:11:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.552 23:11:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.552 23:11:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.810 /dev/nbd1 00:05:30.810 23:11:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.810 23:11:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.810 23:11:28 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:30.810 23:11:28 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:30.810 23:11:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:30.810 23:11:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:30.810 23:11:28 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:30.810 23:11:28 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:30.810 23:11:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:30.810 23:11:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:30.810 23:11:28 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.810 1+0 records in 00:05:30.810 1+0 records out 00:05:30.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000158595 s, 25.8 MB/s 00:05:30.810 23:11:28 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.810 23:11:28 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:30.810 23:11:28 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.810 23:11:28 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:30.810 
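The waitfornbd probe that just ran for both devices is worth spelling out: for each /dev/nbdX the helper polls /proc/partitions for the device name, then performs a single 4 KiB O_DIRECT read and checks that a non-zero byte count came back, which is what the "1+0 records" dd output above reports. Per-device sketch taken from the trace:

  grep -q -w nbd0 /proc/partitions                                          # wait until the kernel exposes the device
  dd if=/dev/nbd0 of=$SPDK/test/event/nbdtest bs=4096 count=1 iflag=direct  # one direct 4 KiB read
  test "$(stat -c %s $SPDK/test/event/nbdtest)" != 0                        # confirm data was read back
  rm -f $SPDK/test/event/nbdtest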
23:11:28 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:30.810 23:11:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.810 23:11:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.810 23:11:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.810 23:11:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.810 23:11:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.068 23:11:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:31.068 { 00:05:31.068 "nbd_device": "/dev/nbd0", 00:05:31.068 "bdev_name": "Malloc0" 00:05:31.068 }, 00:05:31.068 { 00:05:31.068 "nbd_device": "/dev/nbd1", 00:05:31.068 "bdev_name": "Malloc1" 00:05:31.068 } 00:05:31.068 ]' 00:05:31.068 23:11:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:31.068 { 00:05:31.068 "nbd_device": "/dev/nbd0", 00:05:31.068 "bdev_name": "Malloc0" 00:05:31.068 }, 00:05:31.068 { 00:05:31.068 "nbd_device": "/dev/nbd1", 00:05:31.068 "bdev_name": "Malloc1" 00:05:31.068 } 00:05:31.068 ]' 00:05:31.068 23:11:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.068 23:11:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:31.068 /dev/nbd1' 00:05:31.068 23:11:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:31.068 /dev/nbd1' 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:31.069 256+0 records in 00:05:31.069 256+0 records out 00:05:31.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00485838 s, 216 MB/s 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:31.069 256+0 records in 00:05:31.069 256+0 records out 00:05:31.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273941 s, 38.3 MB/s 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:31.069 256+0 records in 00:05:31.069 256+0 records out 00:05:31.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265192 s, 39.5 MB/s 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.069 23:11:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.344 23:11:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.344 23:11:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.344 23:11:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.344 23:11:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.344 23:11:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.344 23:11:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.344 23:11:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.344 23:11:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.344 23:11:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.344 23:11:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.619 23:11:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.619 23:11:29 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.619 23:11:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.619 23:11:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.619 23:11:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.619 23:11:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.619 23:11:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.619 23:11:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.619 23:11:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.619 23:11:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.619 23:11:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.877 23:11:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.877 23:11:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:31.877 23:11:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.877 23:11:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:31.877 23:11:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:31.877 23:11:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.877 23:11:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:31.877 23:11:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:31.877 23:11:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:31.877 23:11:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:31.877 23:11:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:31.877 23:11:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:31.877 23:11:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:32.135 23:11:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:32.393 [2024-07-25 23:11:30.052597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.651 [2024-07-25 23:11:30.144150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.651 [2024-07-25 23:11:30.144155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.651 [2024-07-25 23:11:30.200557] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:32.651 [2024-07-25 23:11:30.200625] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
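[annotation] The Round 0 setup traced above reduces to four RPC calls against the NBD socket. A minimal sketch, assuming rpc.py from the SPDK checkout is on PATH (the harness invokes it by its full workspace path):

  sock=/var/tmp/spdk-nbd.sock
  # Two 64 MB malloc bdevs with a 4096-byte block size...
  rpc.py -s "$sock" bdev_malloc_create 64 4096    # prints: Malloc0
  rpc.py -s "$sock" bdev_malloc_create 64 4096    # prints: Malloc1
  # ...each exported through the kernel NBD driver.
  rpc.py -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
  rpc.py -s "$sock" nbd_start_disk Malloc1 /dev/nbd1
  # waitfornbd then polls /proc/partitions for the new device and
  # confirms it is readable with a single 4 KiB O_DIRECT read:
  grep -q -w nbd0 /proc/partitions
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct

/tmp/nbdtest is an illustrative path; the trace writes to spdk/test/event/nbdtest inside the workspace and deletes it afterwards.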
00:05:35.175 23:11:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:35.175 23:11:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:35.175 spdk_app_start Round 1 00:05:35.175 23:11:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1255632 /var/tmp/spdk-nbd.sock 00:05:35.175 23:11:32 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1255632 ']' 00:05:35.175 23:11:32 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.175 23:11:32 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.175 23:11:32 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:35.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:35.175 23:11:32 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.175 23:11:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.432 23:11:33 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:35.432 23:11:33 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:35.432 23:11:33 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.690 Malloc0 00:05:35.690 23:11:33 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.948 Malloc1 00:05:35.948 23:11:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.948 23:11:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.948 23:11:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.948 23:11:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.948 23:11:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.948 23:11:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.948 23:11:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.948 23:11:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.948 23:11:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.948 23:11:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.948 23:11:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.948 23:11:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.948 23:11:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:35.948 23:11:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.948 23:11:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.948 23:11:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.205 /dev/nbd0 00:05:36.205 23:11:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.205 23:11:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:36.205 23:11:33 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:36.205 23:11:33 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:36.205 23:11:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:36.205 23:11:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:36.205 23:11:33 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:36.205 23:11:33 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:36.205 23:11:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:36.205 23:11:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:36.205 23:11:33 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.205 1+0 records in 00:05:36.205 1+0 records out 00:05:36.205 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207534 s, 19.7 MB/s 00:05:36.205 23:11:33 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.205 23:11:33 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:36.205 23:11:33 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.205 23:11:33 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:36.205 23:11:33 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:36.205 23:11:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.205 23:11:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.205 23:11:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.462 /dev/nbd1 00:05:36.462 23:11:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.462 23:11:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.462 23:11:34 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:36.462 23:11:34 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:36.462 23:11:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:36.462 23:11:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:36.462 23:11:34 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:36.462 23:11:34 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:36.462 23:11:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:36.462 23:11:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:36.462 23:11:34 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.462 1+0 records in 00:05:36.462 1+0 records out 00:05:36.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205848 s, 19.9 MB/s 00:05:36.462 23:11:34 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.462 23:11:34 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:36.462 23:11:34 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.462 23:11:34 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:36.462 23:11:34 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:36.462 23:11:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.462 23:11:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.462 23:11:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.462 23:11:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.462 23:11:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.719 23:11:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:36.719 { 00:05:36.719 "nbd_device": "/dev/nbd0", 00:05:36.719 "bdev_name": "Malloc0" 00:05:36.719 }, 00:05:36.719 { 00:05:36.719 "nbd_device": "/dev/nbd1", 00:05:36.719 "bdev_name": "Malloc1" 00:05:36.719 } 00:05:36.719 ]' 00:05:36.719 23:11:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.719 { 00:05:36.719 "nbd_device": "/dev/nbd0", 00:05:36.719 "bdev_name": "Malloc0" 00:05:36.719 }, 00:05:36.719 { 00:05:36.719 "nbd_device": "/dev/nbd1", 00:05:36.719 "bdev_name": "Malloc1" 00:05:36.719 } 00:05:36.719 ]' 00:05:36.719 23:11:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.976 23:11:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.976 /dev/nbd1' 00:05:36.976 23:11:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.976 /dev/nbd1' 00:05:36.976 23:11:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.976 23:11:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.976 23:11:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.976 23:11:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.976 23:11:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.976 23:11:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.976 23:11:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.976 23:11:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.976 23:11:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.976 23:11:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.976 23:11:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.976 23:11:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.976 256+0 records in 00:05:36.976 256+0 records out 00:05:36.976 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00499723 s, 210 MB/s 00:05:36.976 23:11:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.976 23:11:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.976 256+0 records in 00:05:36.976 256+0 records out 00:05:36.976 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0237433 s, 44.2 MB/s 00:05:36.976 23:11:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.976 23:11:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.976 256+0 records in 00:05:36.976 256+0 records out 00:05:36.976 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289246 s, 36.3 MB/s 00:05:36.976 23:11:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.976 23:11:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.976 23:11:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.976 23:11:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.977 23:11:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.977 23:11:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.977 23:11:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.977 23:11:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.977 23:11:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.977 23:11:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.977 23:11:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.977 23:11:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.977 23:11:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.977 23:11:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.977 23:11:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.977 23:11:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.977 23:11:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:36.977 23:11:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.977 23:11:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:37.234 23:11:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:37.234 23:11:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:37.234 23:11:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:37.234 23:11:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.234 23:11:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.234 23:11:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.234 23:11:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.234 23:11:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.234 23:11:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.234 23:11:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:37.491 23:11:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.491 23:11:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:37.491 23:11:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.491 23:11:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.491 23:11:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.491 23:11:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.491 23:11:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.492 23:11:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.492 23:11:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.492 23:11:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.492 23:11:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.749 23:11:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.749 23:11:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.749 23:11:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.749 23:11:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:37.749 23:11:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.749 23:11:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.749 23:11:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:37.749 23:11:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:37.749 23:11:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:37.749 23:11:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:37.749 23:11:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:37.749 23:11:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:37.749 23:11:35 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:38.007 23:11:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:38.265 [2024-07-25 23:11:35.866306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.265 [2024-07-25 23:11:35.955631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.265 [2024-07-25 23:11:35.955636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.523 [2024-07-25 23:11:36.018177] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:38.523 [2024-07-25 23:11:36.018247] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
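[annotation] The data-integrity pass repeated in every round above is plain dd plus cmp: 1 MiB of random data is pushed through each NBD device with O_DIRECT and compared back against the source file. Roughly, with the temp path shortened for illustration:

  tmp=/tmp/nbdrandtest
  # 256 blocks of 4096 bytes = the 1 MiB seen in the dd output above
  dd if=/dev/urandom of="$tmp" bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
      cmp -b -n 1M "$tmp" "$nbd"    # non-zero exit on any byte mismatch
  done
  rm "$tmp"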
00:05:41.050 23:11:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:41.050 23:11:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:41.050 spdk_app_start Round 2 00:05:41.050 23:11:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1255632 /var/tmp/spdk-nbd.sock 00:05:41.050 23:11:38 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1255632 ']' 00:05:41.050 23:11:38 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.050 23:11:38 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.050 23:11:38 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:41.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:41.050 23:11:38 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.050 23:11:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.307 23:11:38 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.307 23:11:38 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:41.307 23:11:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.565 Malloc0 00:05:41.565 23:11:39 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.823 Malloc1 00:05:41.823 23:11:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.823 23:11:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.823 23:11:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.823 23:11:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:41.823 23:11:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.823 23:11:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:41.823 23:11:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.823 23:11:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.823 23:11:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.823 23:11:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:41.823 23:11:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.823 23:11:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:41.823 23:11:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:41.823 23:11:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:41.823 23:11:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.823 23:11:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:42.079 /dev/nbd0 00:05:42.079 23:11:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:42.079 23:11:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:42.079 23:11:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:42.079 23:11:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:42.079 23:11:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:42.079 23:11:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:42.079 23:11:39 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:42.079 23:11:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:42.079 23:11:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:42.079 23:11:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:42.079 23:11:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.079 1+0 records in 00:05:42.080 1+0 records out 00:05:42.080 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208473 s, 19.6 MB/s 00:05:42.080 23:11:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.080 23:11:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:42.080 23:11:39 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.080 23:11:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:42.080 23:11:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:42.080 23:11:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.080 23:11:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.080 23:11:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:42.338 /dev/nbd1 00:05:42.338 23:11:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:42.338 23:11:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:42.338 23:11:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:42.338 23:11:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:42.338 23:11:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:42.338 23:11:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:42.338 23:11:39 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:42.338 23:11:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:42.338 23:11:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:42.338 23:11:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:42.338 23:11:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.338 1+0 records in 00:05:42.338 1+0 records out 00:05:42.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189855 s, 21.6 MB/s 00:05:42.338 23:11:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.338 23:11:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:42.338 23:11:39 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.338 23:11:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:42.338 23:11:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:42.338 23:11:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.338 23:11:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.338 23:11:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.338 23:11:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.338 23:11:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.596 23:11:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:42.596 { 00:05:42.596 "nbd_device": "/dev/nbd0", 00:05:42.596 "bdev_name": "Malloc0" 00:05:42.596 }, 00:05:42.596 { 00:05:42.596 "nbd_device": "/dev/nbd1", 00:05:42.596 "bdev_name": "Malloc1" 00:05:42.596 } 00:05:42.596 ]' 00:05:42.596 23:11:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:42.596 { 00:05:42.596 "nbd_device": "/dev/nbd0", 00:05:42.596 "bdev_name": "Malloc0" 00:05:42.596 }, 00:05:42.596 { 00:05:42.596 "nbd_device": "/dev/nbd1", 00:05:42.596 "bdev_name": "Malloc1" 00:05:42.596 } 00:05:42.596 ]' 00:05:42.596 23:11:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.596 23:11:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:42.596 /dev/nbd1' 00:05:42.596 23:11:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:42.596 /dev/nbd1' 00:05:42.596 23:11:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.596 23:11:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:42.596 23:11:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:42.596 23:11:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:42.596 23:11:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:42.596 23:11:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:42.596 23:11:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.596 23:11:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.596 23:11:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:42.596 23:11:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.596 23:11:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:42.596 23:11:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:42.596 256+0 records in 00:05:42.596 256+0 records out 00:05:42.596 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00371901 s, 282 MB/s 00:05:42.596 23:11:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.596 23:11:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:42.854 256+0 records in 00:05:42.854 256+0 records out 00:05:42.854 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0273255 s, 38.4 MB/s 00:05:42.854 23:11:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.854 23:11:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:42.854 256+0 records in 00:05:42.854 256+0 records out 00:05:42.854 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233651 s, 44.9 MB/s 00:05:42.854 23:11:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:42.854 23:11:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.854 23:11:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.854 23:11:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:42.854 23:11:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.854 23:11:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:42.854 23:11:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:42.854 23:11:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.854 23:11:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:42.854 23:11:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.854 23:11:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:42.854 23:11:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.854 23:11:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:42.854 23:11:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.854 23:11:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.854 23:11:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:42.854 23:11:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:42.854 23:11:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.854 23:11:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:43.111 23:11:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:43.111 23:11:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:43.111 23:11:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:43.111 23:11:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.111 23:11:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.111 23:11:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:43.111 23:11:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.111 23:11:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.111 23:11:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.111 23:11:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:43.369 23:11:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:43.369 23:11:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:43.369 23:11:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:43.369 23:11:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.369 23:11:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.369 23:11:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:43.369 23:11:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.369 23:11:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.369 23:11:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.369 23:11:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.369 23:11:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.627 23:11:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:43.627 23:11:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:43.627 23:11:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.627 23:11:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:43.627 23:11:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:43.627 23:11:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.627 23:11:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:43.627 23:11:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:43.627 23:11:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:43.627 23:11:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:43.627 23:11:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:43.627 23:11:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:43.627 23:11:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:43.884 23:11:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:44.142 [2024-07-25 23:11:41.715093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.142 [2024-07-25 23:11:41.804274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.143 [2024-07-25 23:11:41.804279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.143 [2024-07-25 23:11:41.866632] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:44.143 [2024-07-25 23:11:41.866699] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
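[annotation] Teardown mirrors setup. A condensed sketch of what the trace above performs; note the real waitfornbd_exit caps its /proc/partitions poll at 20 attempts rather than looping indefinitely:

  sock=/var/tmp/spdk-nbd.sock
  for name in nbd0 nbd1; do
      rpc.py -s "$sock" nbd_stop_disk "/dev/$name"
      # wait for the kernel to drop the device
      while grep -q -w "$name" /proc/partitions; do sleep 0.1; done
  done
  rpc.py -s "$sock" nbd_get_disks               # prints: []
  rpc.py -s "$sock" spdk_kill_instance SIGTERM  # ask the app to exit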
00:05:47.416 23:11:44 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1255632 /var/tmp/spdk-nbd.sock 00:05:47.416 23:11:44 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1255632 ']' 00:05:47.416 23:11:44 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.416 23:11:44 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.416 23:11:44 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:47.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:47.416 23:11:44 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.416 23:11:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.416 23:11:44 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.416 23:11:44 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:47.416 23:11:44 event.app_repeat -- event/event.sh@39 -- # killprocess 1255632 00:05:47.416 23:11:44 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1255632 ']' 00:05:47.416 23:11:44 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1255632 00:05:47.416 23:11:44 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:47.416 23:11:44 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.416 23:11:44 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1255632 00:05:47.416 23:11:44 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.416 23:11:44 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.416 23:11:44 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1255632' 00:05:47.416 killing process with pid 1255632 00:05:47.416 23:11:44 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1255632 00:05:47.416 23:11:44 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1255632 00:05:47.416 spdk_app_start is called in Round 0. 00:05:47.416 Shutdown signal received, stop current app iteration 00:05:47.416 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 reinitialization... 00:05:47.416 spdk_app_start is called in Round 1. 00:05:47.416 Shutdown signal received, stop current app iteration 00:05:47.416 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 reinitialization... 00:05:47.416 spdk_app_start is called in Round 2. 00:05:47.416 Shutdown signal received, stop current app iteration 00:05:47.416 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 reinitialization... 00:05:47.416 spdk_app_start is called in Round 3. 
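[annotation] The killprocess helper invoked above is, in essence, the following; the pid value is taken from this run for illustration, and the sudo branch (which escalates the kill) is omitted:

  pid=1255632
  kill -0 "$pid"                                # fail fast if already gone
  if [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
      kill "$pid"                               # SIGTERM the reactor process
  fi
  wait "$pid"                                   # reap it and propagate exit status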
00:05:47.416 Shutdown signal received, stop current app iteration 00:05:47.416 23:11:44 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:47.416 23:11:44 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:47.416 00:05:47.416 real 0m18.056s 00:05:47.416 user 0m39.390s 00:05:47.416 sys 0m3.269s 00:05:47.416 23:11:44 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.416 23:11:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.416 ************************************ 00:05:47.416 END TEST app_repeat 00:05:47.416 ************************************ 00:05:47.416 23:11:45 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:47.416 23:11:45 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:47.416 23:11:45 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.416 23:11:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.416 23:11:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.416 ************************************ 00:05:47.416 START TEST cpu_locks 00:05:47.416 ************************************ 00:05:47.416 23:11:45 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:47.416 * Looking for test storage... 00:05:47.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:47.416 23:11:45 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:47.416 23:11:45 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:47.416 23:11:45 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:47.416 23:11:45 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:47.416 23:11:45 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.416 23:11:45 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.416 23:11:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.416 ************************************ 00:05:47.416 START TEST default_locks 00:05:47.416 ************************************ 00:05:47.416 23:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:47.416 23:11:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1257984 00:05:47.416 23:11:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.416 23:11:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1257984 00:05:47.416 23:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1257984 ']' 00:05:47.416 23:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.416 23:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.416 23:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
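[annotation] waitforlisten, echoing above and throughout the cpu_locks tests that follow, amounts to retrying a harmless RPC until the target's UNIX socket answers. A simplified sketch; the real helper also watches the pid and gives up after max_retries, and the probe RPC is assumed here to be rpc_get_methods:

  rpc_addr=/var/tmp/spdk.sock
  for i in $(seq 1 100); do
      rpc.py -t 1 -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done
  # locks_exist (seen in the trace below) then confirms the target
  # actually holds its CPU core lock file:
  lslocks -p "$target_pid" | grep -q spdk_cpu_lock   # $target_pid: placeholder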
00:05:47.416 23:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.416 23:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.673 [2024-07-25 23:11:45.170619] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:47.673 [2024-07-25 23:11:45.170705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257984 ] 00:05:47.673 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.673 [2024-07-25 23:11:45.204822] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:47.673 [2024-07-25 23:11:45.234982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.673 [2024-07-25 23:11:45.324949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.931 23:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.931 23:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:47.931 23:11:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1257984 00:05:47.931 23:11:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1257984 00:05:47.931 23:11:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.188 lslocks: write error 00:05:48.188 23:11:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1257984 00:05:48.188 23:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1257984 ']' 00:05:48.188 23:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1257984 00:05:48.188 23:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:48.188 23:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:48.188 23:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1257984 00:05:48.445 23:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:48.445 23:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:48.445 23:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1257984' 00:05:48.445 killing process with pid 1257984 00:05:48.445 23:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1257984 00:05:48.445 23:11:45 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1257984 00:05:48.703 23:11:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1257984 00:05:48.703 23:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:48.703 23:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1257984 00:05:48.703 23:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:48.703 23:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.703 23:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t 
waitforlisten 00:05:48.703 23:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.703 23:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1257984 00:05:48.703 23:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1257984 ']' 00:05:48.703 23:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.703 23:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.703 23:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.703 23:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.703 23:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1257984) - No such process 00:05:48.703 ERROR: process (pid: 1257984) is no longer running 00:05:48.703 23:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.703 23:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:48.703 23:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:48.704 23:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:48.704 23:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:48.704 23:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:48.704 23:11:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:48.704 23:11:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:48.704 23:11:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:48.704 23:11:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:48.704 00:05:48.704 real 0m1.203s 00:05:48.704 user 0m1.136s 00:05:48.704 sys 0m0.542s 00:05:48.704 23:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.704 23:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.704 ************************************ 00:05:48.704 END TEST default_locks 00:05:48.704 ************************************ 00:05:48.704 23:11:46 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:48.704 23:11:46 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.704 23:11:46 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.704 23:11:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.704 ************************************ 00:05:48.704 START TEST default_locks_via_rpc 00:05:48.704 ************************************ 00:05:48.704 23:11:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:48.704 23:11:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1258148 00:05:48.704 23:11:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.704 23:11:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1258148 00:05:48.704 23:11:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1258148 ']' 00:05:48.704 23:11:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.704 23:11:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.704 23:11:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.704 23:11:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.704 23:11:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.704 [2024-07-25 23:11:46.420082] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:48.704 [2024-07-25 23:11:46.420172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258148 ] 00:05:48.963 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.963 [2024-07-25 23:11:46.454380] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:48.963 [2024-07-25 23:11:46.480455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.963 [2024-07-25 23:11:46.568680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.221 23:11:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.221 23:11:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:49.221 23:11:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:49.221 23:11:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.221 23:11:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.221 23:11:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.221 23:11:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:49.221 23:11:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:49.221 23:11:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:49.221 23:11:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:49.221 23:11:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:49.221 23:11:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.221 23:11:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.221 23:11:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.221 23:11:46 event.cpu_locks.default_locks_via_rpc 
-- event/cpu_locks.sh@71 -- # locks_exist 1258148 00:05:49.221 23:11:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1258148 00:05:49.221 23:11:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.478 23:11:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1258148 00:05:49.478 23:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1258148 ']' 00:05:49.479 23:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1258148 00:05:49.479 23:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:49.479 23:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.479 23:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1258148 00:05:49.479 23:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.479 23:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.479 23:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1258148' 00:05:49.479 killing process with pid 1258148 00:05:49.479 23:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1258148 00:05:49.479 23:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1258148 00:05:50.080 00:05:50.080 real 0m1.181s 00:05:50.080 user 0m1.113s 00:05:50.080 sys 0m0.548s 00:05:50.080 23:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.080 23:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.080 ************************************ 00:05:50.080 END TEST default_locks_via_rpc 00:05:50.080 ************************************ 00:05:50.080 23:11:47 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:50.080 23:11:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.080 23:11:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.080 23:11:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.080 ************************************ 00:05:50.080 START TEST non_locking_app_on_locked_coremask 00:05:50.080 ************************************ 00:05:50.080 23:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:50.080 23:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1258309 00:05:50.080 23:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.080 23:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1258309 /var/tmp/spdk.sock 00:05:50.080 23:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1258309 ']' 00:05:50.080 23:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.080 23:11:47 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.080 23:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.080 23:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.080 23:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.080 [2024-07-25 23:11:47.660207] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:50.080 [2024-07-25 23:11:47.660305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258309 ] 00:05:50.080 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.080 [2024-07-25 23:11:47.697667] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:50.080 [2024-07-25 23:11:47.727960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.338 [2024-07-25 23:11:47.824093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.597 23:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.597 23:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:50.597 23:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1258388 00:05:50.597 23:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:50.597 23:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1258388 /var/tmp/spdk2.sock 00:05:50.597 23:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1258388 ']' 00:05:50.597 23:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.597 23:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.597 23:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.597 23:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.597 23:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.597 [2024-07-25 23:11:48.127633] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
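The records above capture the core of non_locking_app_on_locked_coremask: a first spdk_tgt claims core 0, then a second one is started on the same mask with --disable-cpumask-locks and its own RPC socket, so it comes up without contending for the lock. A minimal sketch of that launch sequence, with the binary path taken from the log and the readiness wait omitted:

  SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$SPDK_TGT" -m 0x1 & pid1=$!      # first target claims the core-0 lock
  "$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & pid2=$!
  # both processes now run reactors on core 0; only pid1 holds the lock file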
00:05:50.597 [2024-07-25 23:11:48.127715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258388 ] 00:05:50.597 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.597 [2024-07-25 23:11:48.163789] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:50.597 [2024-07-25 23:11:48.220429] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:50.597 [2024-07-25 23:11:48.220456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.855 [2024-07-25 23:11:48.398268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.421 23:11:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.421 23:11:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:51.421 23:11:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1258309 00:05:51.421 23:11:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1258309 00:05:51.421 23:11:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.986 lslocks: write error 00:05:51.986 23:11:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1258309 00:05:51.986 23:11:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1258309 ']' 00:05:51.986 23:11:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1258309 00:05:51.987 23:11:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:51.987 23:11:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.987 23:11:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1258309 00:05:51.987 23:11:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.987 23:11:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.987 23:11:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1258309' 00:05:51.987 killing process with pid 1258309 00:05:51.987 23:11:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1258309 00:05:51.987 23:11:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1258309 00:05:52.920 23:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1258388 00:05:52.920 23:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1258388 ']' 00:05:52.920 23:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1258388 00:05:52.920 23:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:52.920 23:11:50 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.920 23:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1258388 00:05:52.920 23:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:52.920 23:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:52.920 23:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1258388' 00:05:52.920 killing process with pid 1258388 00:05:52.920 23:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1258388 00:05:52.920 23:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1258388 00:05:53.180 00:05:53.180 real 0m3.204s 00:05:53.180 user 0m3.349s 00:05:53.180 sys 0m1.094s 00:05:53.180 23:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.180 23:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.180 ************************************ 00:05:53.180 END TEST non_locking_app_on_locked_coremask 00:05:53.180 ************************************ 00:05:53.180 23:11:50 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:53.180 23:11:50 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.180 23:11:50 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.180 23:11:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.180 ************************************ 00:05:53.180 START TEST locking_app_on_unlocked_coremask 00:05:53.180 ************************************ 00:05:53.180 23:11:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:53.180 23:11:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1258744 00:05:53.180 23:11:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:53.180 23:11:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1258744 /var/tmp/spdk.sock 00:05:53.180 23:11:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1258744 ']' 00:05:53.180 23:11:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.180 23:11:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.180 23:11:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
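Every 'Waiting for process to start up and listen on UNIX domain socket ...' record above comes from the waitforlisten helper in autotest_common.sh. A rough shell sketch of its polling loop, assuming the in-tree scripts/rpc.py client; the retry bound mirrors the max_retries=100 in the trace, and the real helper does more bookkeeping:

  waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    while ((max_retries-- > 0)); do
      kill -0 "$pid" 2>/dev/null || return 1                        # target died early
      scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
      sleep 0.1
    done
    return 1                                                        # never started listening
  }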
00:05:53.180 23:11:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.180 23:11:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.439 [2024-07-25 23:11:50.909669] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:53.439 [2024-07-25 23:11:50.909768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258744 ] 00:05:53.439 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.439 [2024-07-25 23:11:50.940675] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:53.439 [2024-07-25 23:11:50.972204] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:53.439 [2024-07-25 23:11:50.972234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.439 [2024-07-25 23:11:51.061508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.697 23:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.697 23:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:53.697 23:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1258753 00:05:53.697 23:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:53.697 23:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1258753 /var/tmp/spdk2.sock 00:05:53.697 23:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1258753 ']' 00:05:53.697 23:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.697 23:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.697 23:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.697 23:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.697 23:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.697 [2024-07-25 23:11:51.369826] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:53.697 [2024-07-25 23:11:51.369925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258753 ] 00:05:53.697 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.697 [2024-07-25 23:11:51.405916] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:05:53.955 [2024-07-25 23:11:51.470686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.955 [2024-07-25 23:11:51.650694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.888 23:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.888 23:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:54.888 23:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1258753 00:05:54.888 23:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1258753 00:05:54.888 23:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.147 lslocks: write error 00:05:55.147 23:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1258744 00:05:55.147 23:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1258744 ']' 00:05:55.147 23:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1258744 00:05:55.147 23:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:55.147 23:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.147 23:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1258744 00:05:55.147 23:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.147 23:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.147 23:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1258744' 00:05:55.147 killing process with pid 1258744 00:05:55.147 23:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1258744 00:05:55.147 23:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1258744 00:05:56.078 23:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1258753 00:05:56.078 23:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1258753 ']' 00:05:56.078 23:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1258753 00:05:56.078 23:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:56.078 23:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:56.078 23:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1258753 00:05:56.078 23:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:56.078 23:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:56.078 23:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1258753' 00:05:56.078 killing process 
with pid 1258753 00:05:56.078 23:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1258753 00:05:56.078 23:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1258753 00:05:56.337 00:05:56.337 real 0m3.081s 00:05:56.337 user 0m3.193s 00:05:56.337 sys 0m1.064s 00:05:56.337 23:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.337 23:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.337 ************************************ 00:05:56.337 END TEST locking_app_on_unlocked_coremask 00:05:56.337 ************************************ 00:05:56.337 23:11:53 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:56.337 23:11:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.337 23:11:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.337 23:11:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.337 ************************************ 00:05:56.337 START TEST locking_app_on_locked_coremask 00:05:56.337 ************************************ 00:05:56.337 23:11:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:56.337 23:11:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1259178 00:05:56.337 23:11:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1259178 /var/tmp/spdk.sock 00:05:56.337 23:11:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.337 23:11:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1259178 ']' 00:05:56.337 23:11:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.337 23:11:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.337 23:11:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.337 23:11:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.337 23:11:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.337 [2024-07-25 23:11:54.039208] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:56.337 [2024-07-25 23:11:54.039304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259178 ] 00:05:56.600 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.600 [2024-07-25 23:11:54.070925] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
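The recurring kill-0/uname/ps sequence above is the killprocess helper tearing a target down; condensed to the steps visible in the trace (the autotest_common.sh@95x line numbers), with the sudo guard kept:

  killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                            # @954: must still be running
    local name; name=$(ps --no-headers -o comm= "$pid")   # @956: e.g. reactor_0
    [[ $name == sudo ]] || kill "$pid"                    # @960/@969: plain kill unless wrapped in sudo
    wait "$pid" || true                                   # @974: reap it, ignoring the kill status
  }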
00:05:56.600 [2024-07-25 23:11:54.102792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.600 [2024-07-25 23:11:54.192047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.858 23:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.858 23:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:56.858 23:11:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1259187 00:05:56.858 23:11:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:56.858 23:11:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1259187 /var/tmp/spdk2.sock 00:05:56.858 23:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:56.858 23:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1259187 /var/tmp/spdk2.sock 00:05:56.858 23:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:56.858 23:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.858 23:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:56.858 23:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.858 23:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1259187 /var/tmp/spdk2.sock 00:05:56.858 23:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1259187 ']' 00:05:56.858 23:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.858 23:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.858 23:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.858 23:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.858 23:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.858 [2024-07-25 23:11:54.500901] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:56.858 [2024-07-25 23:11:54.500997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259187 ] 00:05:56.858 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.858 [2024-07-25 23:11:54.534836] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
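locking_app_on_locked_coremask wraps the second startup in NOT (cpu_locks.sh@120 above): the test only passes if waitforlisten fails, because core 0 is already claimed. NOT is essentially an exit-status inverter; paraphrased:

  NOT() {
    if "$@"; then
      return 1    # the wrapped command was expected to fail but succeeded
    fi
    return 0      # observed failure is the desired outcome
  }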
00:05:57.115 [2024-07-25 23:11:54.598489] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1259178 has claimed it. 00:05:57.115 [2024-07-25 23:11:54.598548] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:57.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1259187) - No such process 00:05:57.681 ERROR: process (pid: 1259187) is no longer running 00:05:57.681 23:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.681 23:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:57.681 23:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:57.681 23:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.681 23:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:57.681 23:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.681 23:11:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1259178 00:05:57.681 23:11:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1259178 00:05:57.681 23:11:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.939 lslocks: write error 00:05:57.939 23:11:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1259178 00:05:57.939 23:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1259178 ']' 00:05:57.939 23:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1259178 00:05:57.939 23:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:58.196 23:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.196 23:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1259178 00:05:58.196 23:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:58.196 23:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:58.196 23:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1259178' 00:05:58.196 killing process with pid 1259178 00:05:58.196 23:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1259178 00:05:58.196 23:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1259178 00:05:58.454 00:05:58.454 real 0m2.111s 00:05:58.454 user 0m2.248s 00:05:58.454 sys 0m0.676s 00:05:58.454 23:11:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.454 23:11:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.454 ************************************ 00:05:58.454 END TEST locking_app_on_locked_coremask 00:05:58.454 ************************************ 00:05:58.454 23:11:56 
event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:58.454 23:11:56 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.454 23:11:56 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.454 23:11:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.454 ************************************ 00:05:58.454 START TEST locking_overlapped_coremask 00:05:58.454 ************************************ 00:05:58.454 23:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:58.454 23:11:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1259478 00:05:58.454 23:11:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:58.454 23:11:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1259478 /var/tmp/spdk.sock 00:05:58.454 23:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1259478 ']' 00:05:58.454 23:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.454 23:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.454 23:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.454 23:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.454 23:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.714 [2024-07-25 23:11:56.199948] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:05:58.714 [2024-07-25 23:11:56.200024] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259478 ] 00:05:58.714 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.714 [2024-07-25 23:11:56.232423] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
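The masks chosen here are what make the test interesting: the first target takes -m 0x7 (cores 0, 1, 2) and the second, started below, takes -m 0x1c (cores 2, 3, 4), so exactly one core is contested. The overlap can be checked with plain shell arithmetic:

  printf 'overlap mask: 0x%x\n' $((0x7 & 0x1c))   # -> 0x4, i.e. core 2 only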
00:05:58.714 [2024-07-25 23:11:56.258465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.714 [2024-07-25 23:11:56.348102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.714 [2024-07-25 23:11:56.348159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.714 [2024-07-25 23:11:56.348163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.972 23:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.972 23:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:58.972 23:11:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1259487 00:05:58.972 23:11:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1259487 /var/tmp/spdk2.sock 00:05:58.972 23:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:58.972 23:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1259487 /var/tmp/spdk2.sock 00:05:58.972 23:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:58.972 23:11:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:58.972 23:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.972 23:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:58.972 23:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.972 23:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1259487 /var/tmp/spdk2.sock 00:05:58.972 23:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1259487 ']' 00:05:58.972 23:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.972 23:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.972 23:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.972 23:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.972 23:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.972 [2024-07-25 23:11:56.653226] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
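The locks these reactors contend for are files named /var/tmp/spdk_cpu_lock_NNN, one per claimed core; the check_remaining_locks step further down compares what exists against /var/tmp/spdk_cpu_lock_{000..002}. A shell-level approximation of one claim using flock(1); the target itself takes the lock from C, so this is illustrative only:

  exec 9>/var/tmp/spdk_cpu_lock_002     # the core-2 lock file
  if ! flock -n 9; then
    echo 'core 2 already claimed' >&2   # corresponds to the claim_cpu_cores *ERROR* below
  fi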
00:05:58.972 [2024-07-25 23:11:56.653321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259487 ] 00:05:58.972 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.972 [2024-07-25 23:11:56.688130] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:59.232 [2024-07-25 23:11:56.742690] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1259478 has claimed it. 00:05:59.232 [2024-07-25 23:11:56.742738] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:59.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1259487) - No such process 00:05:59.802 ERROR: process (pid: 1259487) is no longer running 00:05:59.802 23:11:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.802 23:11:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:59.802 23:11:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:59.802 23:11:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:59.802 23:11:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:59.802 23:11:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:59.802 23:11:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:59.802 23:11:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:59.802 23:11:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:59.802 23:11:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:59.802 23:11:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1259478 00:05:59.802 23:11:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1259478 ']' 00:05:59.802 23:11:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1259478 00:05:59.802 23:11:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:59.802 23:11:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:59.802 23:11:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1259478 00:05:59.802 23:11:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:59.802 23:11:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:59.802 23:11:57 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1259478' 00:05:59.802 killing process with pid 1259478 00:05:59.802 23:11:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 1259478 00:05:59.802 23:11:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1259478 00:06:00.061 00:06:00.061 real 0m1.621s 00:06:00.061 user 0m4.361s 00:06:00.061 sys 0m0.463s 00:06:00.061 23:11:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.061 23:11:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.061 ************************************ 00:06:00.061 END TEST locking_overlapped_coremask 00:06:00.061 ************************************ 00:06:00.321 23:11:57 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:00.321 23:11:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.321 23:11:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.321 23:11:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.321 ************************************ 00:06:00.321 START TEST locking_overlapped_coremask_via_rpc 00:06:00.321 ************************************ 00:06:00.321 23:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:00.321 23:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1259651 00:06:00.321 23:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:00.322 23:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1259651 /var/tmp/spdk.sock 00:06:00.322 23:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1259651 ']' 00:06:00.322 23:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.322 23:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.322 23:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.322 23:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.322 23:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.322 [2024-07-25 23:11:57.874838] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
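In the _via_rpc variant the target boots with --disable-cpumask-locks and the locks are toggled afterwards over the RPC socket, which is what the rpc_cmd framework_enable_cpumask_locks records below show. Done by hand the same toggle would look roughly like this, assuming the in-tree rpc.py client:

  scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # claim cores 0-2 at runtime
  scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # release them again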
00:06:00.322 [2024-07-25 23:11:57.874922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259651 ] 00:06:00.322 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.322 [2024-07-25 23:11:57.907209] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:00.322 [2024-07-25 23:11:57.932583] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:00.322 [2024-07-25 23:11:57.932606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.322 [2024-07-25 23:11:58.022759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.322 [2024-07-25 23:11:58.022824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.322 [2024-07-25 23:11:58.022826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.581 23:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.581 23:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:00.581 23:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1259777 00:06:00.581 23:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1259777 /var/tmp/spdk2.sock 00:06:00.581 23:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1259777 ']' 00:06:00.581 23:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:00.581 23:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.581 23:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.581 23:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.581 23:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.581 23:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.842 [2024-07-25 23:11:58.327315] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:00.842 [2024-07-25 23:11:58.327420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259777 ] 00:06:00.842 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.842 [2024-07-25 23:11:58.362092] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
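rpc_cmd, which issues the framework_enable_cpumask_locks call in the next records, can be thought of as a thin wrapper that binds rpc.py to the socket of the target under test; a simplified sketch (the real helper also supports a persistent daemon mode):

  rpc_cmd() {
    # $rpc_addr is /var/tmp/spdk.sock or /var/tmp/spdk2.sock in these tests
    scripts/rpc.py -s "$rpc_addr" "$@"
  }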
00:06:00.842 [2024-07-25 23:11:58.417301] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:00.842 [2024-07-25 23:11:58.417328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:01.102 [2024-07-25 23:11:58.593249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:01.102 [2024-07-25 23:11:58.593308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.102 [2024-07-25 23:11:58.593305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.669 [2024-07-25 23:11:59.299160] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1259651 has claimed it. 
00:06:01.669 request:
00:06:01.669 {
00:06:01.669 "method": "framework_enable_cpumask_locks",
00:06:01.669 "req_id": 1
00:06:01.669 }
00:06:01.669 Got JSON-RPC error response
00:06:01.669 response:
00:06:01.669 {
00:06:01.669 "code": -32603,
00:06:01.669 "message": "Failed to claim CPU core: 2"
00:06:01.669 }
00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1
00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1259651 /var/tmp/spdk.sock
00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1259651 ']'
00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:01.669 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:01.928 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:01.928 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:06:01.928 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1259777 /var/tmp/spdk2.sock
00:06:01.928 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1259777 ']'
00:06:01.928 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:01.928 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:01.928 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
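The -32603 response above is exactly what the harness asserts on: the RPC client exits non-zero when the target cannot claim core 2, the surrounding NOT treats that failure as success, and es ends up at 1. Reproducing the failing call by hand might look like:

  if ! scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
    echo 'claim refused as expected: core 2 is held by the first target'
  fi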
00:06:01.928 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.928 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.188 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.188 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:02.188 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:02.188 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:02.188 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:02.188 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:02.188 00:06:02.188 real 0m1.985s 00:06:02.188 user 0m1.009s 00:06:02.188 sys 0m0.204s 00:06:02.188 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.188 23:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.188 ************************************ 00:06:02.188 END TEST locking_overlapped_coremask_via_rpc 00:06:02.188 ************************************ 00:06:02.188 23:11:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:02.188 23:11:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1259651 ]] 00:06:02.188 23:11:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1259651 00:06:02.188 23:11:59 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1259651 ']' 00:06:02.188 23:11:59 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1259651 00:06:02.188 23:11:59 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:02.188 23:11:59 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.188 23:11:59 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1259651 00:06:02.188 23:11:59 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.188 23:11:59 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.188 23:11:59 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1259651' 00:06:02.188 killing process with pid 1259651 00:06:02.188 23:11:59 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1259651 00:06:02.188 23:11:59 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1259651 00:06:02.756 23:12:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1259777 ]] 00:06:02.756 23:12:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1259777 00:06:02.756 23:12:00 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1259777 ']' 00:06:02.756 23:12:00 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1259777 00:06:02.756 23:12:00 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:02.756 23:12:00 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:06:02.756 23:12:00 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1259777 00:06:02.756 23:12:00 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:02.756 23:12:00 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:02.756 23:12:00 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1259777' 00:06:02.756 killing process with pid 1259777 00:06:02.756 23:12:00 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1259777 00:06:02.756 23:12:00 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1259777 00:06:03.014 23:12:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:03.014 23:12:00 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:03.014 23:12:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1259651 ]] 00:06:03.014 23:12:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1259651 00:06:03.014 23:12:00 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1259651 ']' 00:06:03.014 23:12:00 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1259651 00:06:03.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1259651) - No such process 00:06:03.014 23:12:00 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1259651 is not found' 00:06:03.014 Process with pid 1259651 is not found 00:06:03.014 23:12:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1259777 ]] 00:06:03.014 23:12:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1259777 00:06:03.014 23:12:00 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1259777 ']' 00:06:03.014 23:12:00 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1259777 00:06:03.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1259777) - No such process 00:06:03.014 23:12:00 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1259777 is not found' 00:06:03.014 Process with pid 1259777 is not found 00:06:03.014 23:12:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:03.014 00:06:03.014 real 0m15.630s 00:06:03.014 user 0m27.197s 00:06:03.014 sys 0m5.488s 00:06:03.014 23:12:00 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.014 23:12:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.014 ************************************ 00:06:03.014 END TEST cpu_locks 00:06:03.014 ************************************ 00:06:03.014 00:06:03.014 real 0m39.479s 00:06:03.014 user 1m15.377s 00:06:03.014 sys 0m9.559s 00:06:03.014 23:12:00 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.014 23:12:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.015 ************************************ 00:06:03.015 END TEST event 00:06:03.015 ************************************ 00:06:03.015 23:12:00 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:03.015 23:12:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.015 23:12:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.015 23:12:00 -- common/autotest_common.sh@10 -- # set +x 00:06:03.015 ************************************ 00:06:03.015 START TEST thread 00:06:03.015 ************************************ 00:06:03.015 23:12:00 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:03.274 * Looking for test storage... 00:06:03.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:03.274 23:12:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:03.274 23:12:00 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:03.274 23:12:00 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.274 23:12:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.274 ************************************ 00:06:03.274 START TEST thread_poller_perf 00:06:03.274 ************************************ 00:06:03.274 23:12:00 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:03.274 [2024-07-25 23:12:00.816373] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:03.274 [2024-07-25 23:12:00.816465] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260201 ] 00:06:03.274 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.274 [2024-07-25 23:12:00.849642] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:03.274 [2024-07-25 23:12:00.877946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.274 [2024-07-25 23:12:00.971372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.274 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:04.654 ====================================== 00:06:04.654 busy:2707238746 (cyc) 00:06:04.654 total_run_count: 299000 00:06:04.654 tsc_hz: 2700000000 (cyc) 00:06:04.654 ====================================== 00:06:04.654 poller_cost: 9054 (cyc), 3353 (nsec) 00:06:04.654 00:06:04.654 real 0m1.258s 00:06:04.654 user 0m1.175s 00:06:04.654 sys 0m0.077s 00:06:04.654 23:12:02 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.654 23:12:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.654 ************************************ 00:06:04.654 END TEST thread_poller_perf 00:06:04.654 ************************************ 00:06:04.654 23:12:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:04.654 23:12:02 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:04.654 23:12:02 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.654 23:12:02 thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.654 ************************************ 00:06:04.654 START TEST thread_poller_perf 00:06:04.654 ************************************ 00:06:04.654 23:12:02 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:04.654 [2024-07-25 23:12:02.119889] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
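poller_perf's ====== summary above is self-describing arithmetic: poller_cost is the busy TSC-cycle count divided by total_run_count, converted to nanoseconds with the reported 2700000000-cycle tsc_hz. For the 1 us-period run (-l 1) just summarized:

  2707238746 cyc / 299000 polls ≈ 9054 cyc per poll
  9054 cyc / 2.7 cyc per ns     ≈ 3353 ns per poll

matching the logged "poller_cost: 9054 (cyc), 3353 (nsec)". The -l 0 variant whose startup banner begins above repeats the measurement with a zero-microsecond period; its run count climbs to 3888000 and the per-poll cost drops to 695 cyc / 257 nsec in the summary further down.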
00:06:04.654 [2024-07-25 23:12:02.119955] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260415 ] 00:06:04.654 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.654 [2024-07-25 23:12:02.155152] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:04.654 [2024-07-25 23:12:02.185078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.654 [2024-07-25 23:12:02.278583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.655 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:06.032 ====================================== 00:06:06.032 busy:2702582777 (cyc) 00:06:06.032 total_run_count: 3888000 00:06:06.032 tsc_hz: 2700000000 (cyc) 00:06:06.032 ====================================== 00:06:06.032 poller_cost: 695 (cyc), 257 (nsec) 00:06:06.032 00:06:06.032 real 0m1.254s 00:06:06.032 user 0m1.161s 00:06:06.032 sys 0m0.086s 00:06:06.032 23:12:03 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.032 23:12:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:06.032 ************************************ 00:06:06.032 END TEST thread_poller_perf 00:06:06.032 ************************************ 00:06:06.032 23:12:03 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:06.032 00:06:06.032 real 0m2.652s 00:06:06.032 user 0m2.393s 00:06:06.032 sys 0m0.257s 00:06:06.032 23:12:03 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.032 23:12:03 thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.032 ************************************ 00:06:06.032 END TEST thread 00:06:06.032 ************************************ 00:06:06.032 23:12:03 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:06:06.032 23:12:03 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:06.033 23:12:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.033 23:12:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.033 23:12:03 -- common/autotest_common.sh@10 -- # set +x 00:06:06.033 ************************************ 00:06:06.033 START TEST app_cmdline 00:06:06.033 ************************************ 00:06:06.033 23:12:03 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:06.033 * Looking for test storage... 
00:06:06.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:06.033 23:12:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:06.033 23:12:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1260612 00:06:06.033 23:12:03 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:06.033 23:12:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1260612 00:06:06.033 23:12:03 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1260612 ']' 00:06:06.033 23:12:03 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.033 23:12:03 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.033 23:12:03 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.033 23:12:03 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.033 23:12:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:06.033 [2024-07-25 23:12:03.529516] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:06.033 [2024-07-25 23:12:03.529612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260612 ] 00:06:06.033 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.033 [2024-07-25 23:12:03.561797] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
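cmdline.sh deliberately starts its target with --rpcs-allowed spdk_get_version,rpc_get_methods (visible in the spdk_tgt invocation above), so the JSON-RPC surface is narrowed to exactly two methods and anything else must be rejected. Condensed, the three calls the test then makes against that allowlist:

  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  ./scripts/rpc.py spdk_get_version        # allowed: returns the version object below
  ./scripts/rpc.py rpc_get_methods         # allowed: must list exactly these two methods
  ./scripts/rpc.py env_dpdk_get_mem_stats  # filtered: -32601 "Method not found", as logged further down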
00:06:06.033 [2024-07-25 23:12:03.588417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.033 [2024-07-25 23:12:03.672504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.324 23:12:03 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.324 23:12:03 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:06.324 23:12:03 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:06.582 { 00:06:06.582 "version": "SPDK v24.09-pre git sha1 704257090", 00:06:06.582 "fields": { 00:06:06.582 "major": 24, 00:06:06.582 "minor": 9, 00:06:06.582 "patch": 0, 00:06:06.582 "suffix": "-pre", 00:06:06.582 "commit": "704257090" 00:06:06.582 } 00:06:06.582 } 00:06:06.582 23:12:04 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:06.582 23:12:04 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:06.582 23:12:04 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:06.582 23:12:04 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:06.582 23:12:04 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:06.582 23:12:04 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:06.582 23:12:04 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.582 23:12:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:06.582 23:12:04 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:06.582 23:12:04 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.582 23:12:04 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:06.582 23:12:04 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:06.582 23:12:04 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:06.582 23:12:04 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:06.582 23:12:04 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:06.582 23:12:04 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:06.582 23:12:04 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.582 23:12:04 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:06.582 23:12:04 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.582 23:12:04 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:06.582 23:12:04 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.582 23:12:04 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:06.582 23:12:04 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:06.582 23:12:04 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:06.843 request: 00:06:06.843 { 00:06:06.843 "method": 
"env_dpdk_get_mem_stats", 00:06:06.843 "req_id": 1 00:06:06.843 } 00:06:06.843 Got JSON-RPC error response 00:06:06.843 response: 00:06:06.843 { 00:06:06.843 "code": -32601, 00:06:06.843 "message": "Method not found" 00:06:06.843 } 00:06:06.843 23:12:04 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:06.843 23:12:04 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:06.843 23:12:04 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:06.843 23:12:04 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:06.843 23:12:04 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1260612 00:06:06.843 23:12:04 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1260612 ']' 00:06:06.843 23:12:04 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1260612 00:06:06.843 23:12:04 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:06.843 23:12:04 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.843 23:12:04 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1260612 00:06:06.843 23:12:04 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.843 23:12:04 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.843 23:12:04 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1260612' 00:06:06.843 killing process with pid 1260612 00:06:06.843 23:12:04 app_cmdline -- common/autotest_common.sh@969 -- # kill 1260612 00:06:06.843 23:12:04 app_cmdline -- common/autotest_common.sh@974 -- # wait 1260612 00:06:07.412 00:06:07.412 real 0m1.517s 00:06:07.412 user 0m1.854s 00:06:07.412 sys 0m0.458s 00:06:07.412 23:12:04 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.412 23:12:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:07.412 ************************************ 00:06:07.412 END TEST app_cmdline 00:06:07.412 ************************************ 00:06:07.412 23:12:04 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:07.413 23:12:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.413 23:12:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.413 23:12:04 -- common/autotest_common.sh@10 -- # set +x 00:06:07.413 ************************************ 00:06:07.413 START TEST version 00:06:07.413 ************************************ 00:06:07.413 23:12:04 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:07.413 * Looking for test storage... 
00:06:07.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:07.413 23:12:05 version -- app/version.sh@17 -- # get_header_version major 00:06:07.413 23:12:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:07.413 23:12:05 version -- app/version.sh@14 -- # cut -f2 00:06:07.413 23:12:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:07.413 23:12:05 version -- app/version.sh@17 -- # major=24 00:06:07.413 23:12:05 version -- app/version.sh@18 -- # get_header_version minor 00:06:07.413 23:12:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:07.413 23:12:05 version -- app/version.sh@14 -- # cut -f2 00:06:07.413 23:12:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:07.413 23:12:05 version -- app/version.sh@18 -- # minor=9 00:06:07.413 23:12:05 version -- app/version.sh@19 -- # get_header_version patch 00:06:07.413 23:12:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:07.413 23:12:05 version -- app/version.sh@14 -- # cut -f2 00:06:07.413 23:12:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:07.413 23:12:05 version -- app/version.sh@19 -- # patch=0 00:06:07.413 23:12:05 version -- app/version.sh@20 -- # get_header_version suffix 00:06:07.413 23:12:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:07.413 23:12:05 version -- app/version.sh@14 -- # cut -f2 00:06:07.413 23:12:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:07.413 23:12:05 version -- app/version.sh@20 -- # suffix=-pre 00:06:07.413 23:12:05 version -- app/version.sh@22 -- # version=24.9 00:06:07.413 23:12:05 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:07.413 23:12:05 version -- app/version.sh@28 -- # version=24.9rc0 00:06:07.413 23:12:05 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:07.413 23:12:05 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:07.413 23:12:05 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:07.413 23:12:05 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:07.413 00:06:07.413 real 0m0.110s 00:06:07.413 user 0m0.062s 00:06:07.413 sys 0m0.069s 00:06:07.413 23:12:05 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.413 23:12:05 version -- common/autotest_common.sh@10 -- # set +x 00:06:07.413 ************************************ 00:06:07.413 END TEST version 00:06:07.413 ************************************ 00:06:07.413 23:12:05 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:06:07.413 23:12:05 -- spdk/autotest.sh@202 -- # uname -s 00:06:07.413 23:12:05 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:06:07.413 23:12:05 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:07.413 23:12:05 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:07.413 23:12:05 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 
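version.sh never needs a running target: each component is scraped straight out of include/spdk/version.h with the grep/cut/tr pipeline traced above, reassembled into 24.9, and checked against the 24.9rc0 that python3 -c 'import spdk; print(spdk.__version__)' reports for the -pre suffix. A standalone sketch of the same extraction, assuming it runs from the spdk source root:

  get_header_version() {
    # the macros are tab-separated in version.h, hence the bare cut -f2 in the trace
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h \
      | cut -f2 | tr -d '"'
  }
  echo "$(get_header_version MAJOR).$(get_header_version MINOR)"  # 24.9
  get_header_version PATCH                                        # 0 (so no .patch appended)
  get_header_version SUFFIX                                       # -pre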
00:06:07.413 23:12:05 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:07.413 23:12:05 -- spdk/autotest.sh@264 -- # timing_exit lib 00:06:07.413 23:12:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:07.413 23:12:05 -- common/autotest_common.sh@10 -- # set +x 00:06:07.672 23:12:05 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:07.672 23:12:05 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:06:07.672 23:12:05 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:06:07.672 23:12:05 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:06:07.672 23:12:05 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:06:07.672 23:12:05 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:06:07.672 23:12:05 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:07.672 23:12:05 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:07.672 23:12:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.672 23:12:05 -- common/autotest_common.sh@10 -- # set +x 00:06:07.672 ************************************ 00:06:07.672 START TEST nvmf_tcp 00:06:07.672 ************************************ 00:06:07.672 23:12:05 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:07.672 * Looking for test storage... 00:06:07.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:07.672 23:12:05 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:07.672 23:12:05 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:07.672 23:12:05 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:07.672 23:12:05 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:07.672 23:12:05 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.672 23:12:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:07.672 ************************************ 00:06:07.672 START TEST nvmf_target_core 00:06:07.672 ************************************ 00:06:07.672 23:12:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:07.672 * Looking for test storage... 00:06:07.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:07.672 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:07.672 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:07.672 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:07.672 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:07.672 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:07.672 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:07.672 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:07.672 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:07.672 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:07.672 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:07.673 ************************************ 00:06:07.673 START TEST nvmf_abort 00:06:07.673 ************************************ 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:07.673 * Looking for test storage... 
00:06:07.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:07.673 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:07.674 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:07.674 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:07.674 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
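Both nvmf_target_core.sh and abort.sh source test/nvmf/common.sh, which is why the same block of defaults (ports 4420-4422, a freshly generated hostnqn, NET_TYPE=phy, serial SPDKISFASTANDAWESOME, the long PATH exports) scrolls past twice above. Those variables are the connection parameters a test's initiator side consumes; a sketch of that shape, noting that this particular run drives I/O through SPDK's userspace abort example rather than the kernel initiator, so the nvme connect line is illustrative only:

  # as exported by test/nvmf/common.sh in this run
  NVMF_PORT=4420
  NVMF_FIRST_TARGET_IP=10.0.0.2
  NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:...
  nvme connect -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" \
    -n nqn.2016-06.io.spdk:cnode0 --hostnqn="$NVME_HOSTNQN"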
00:06:07.674 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:07.674 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:07.674 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:07.674 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:07.674 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:07.674 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:07.674 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:07.674 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:07.674 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:06:07.674 23:12:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.214 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:10.214 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:06:10.214 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:10.214 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:10.214 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:10.214 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:10.214 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:10.214 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:06:10.214 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:10.214 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:06:10.214 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:06:10.214 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:06:10.214 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:06:10.214 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:06:10.214 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:06:10.214 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:10.214 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:10.214 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:10.214 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:10.214 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:10.214 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:10.214 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:06:10.214 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:10.214 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:10.214 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:10.214 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:10.215 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:10.215 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:10.215 23:12:07 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:10.215 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:10.215 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:10.215 
23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:10.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:10.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:06:10.215 00:06:10.215 --- 10.0.0.2 ping statistics --- 00:06:10.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:10.215 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:10.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:10.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:06:10.215 00:06:10.215 --- 10.0.0.1 ping statistics --- 00:06:10.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:10.215 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # 
nvmfpid=1262808 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1262808 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1262808 ']' 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.215 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.215 [2024-07-25 23:12:07.554310] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:10.215 [2024-07-25 23:12:07.554404] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:10.216 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.216 [2024-07-25 23:12:07.592819] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:10.216 [2024-07-25 23:12:07.618770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.216 [2024-07-25 23:12:07.714699] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:10.216 [2024-07-25 23:12:07.714748] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:10.216 [2024-07-25 23:12:07.714775] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:10.216 [2024-07-25 23:12:07.714786] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:10.216 [2024-07-25 23:12:07.714795] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
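The nvmf_tgt being waited on here is launched through ip netns exec against the namespace plumbing nvmf_tcp_init assembled a few entries earlier: one port of the two-port ice/E810 NIC (cvl_0_0) is moved into the private cvl_0_0_ns_spdk namespace as the target side, the other (cvl_0_1) stays in the root namespace as the initiator side, and the two pings prove the 10.0.0.1 <-> 10.0.0.2 path in both directions before anything NVMe-related starts. Condensed from the traced commands:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator-side address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # the 0.143 ms round trip above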
00:06:10.216 [2024-07-25 23:12:07.716079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.216 [2024-07-25 23:12:07.716123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:10.216 [2024-07-25 23:12:07.716128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.216 [2024-07-25 23:12:07.854610] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.216 Malloc0 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.216 Delay0 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.216 [2024-07-25 23:12:07.932029] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.216 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:10.476 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.476 23:12:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:10.476 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.476 [2024-07-25 23:12:08.037007] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:12.379 Initializing NVMe Controllers 00:06:12.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:12.379 controller IO queue size 128 less than required 00:06:12.379 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:12.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:12.379 Initialization complete. Launching workers. 
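While the abort workload runs, the configuration traced at target/abort.sh@17 through @30 can be collected into one sketch; $RPC and $SPDK are shorthand for the rpc.py and repository paths shown above, and rpc.py is called directly instead of through the rpc_cmd wrapper:

  RPC="$SPDK/scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256     # TCP transport, options as traced
  $RPC bdev_malloc_create 64 4096 -b Malloc0              # 64 MiB RAM disk, 4096-byte blocks
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
                                                          # latencies in microseconds, so ~1 s per I/O
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # then drive aborts against the slow delay bdev:
  "$SPDK/build/examples/abort" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128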
00:06:12.379 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31932 00:06:12.379 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31993, failed to submit 62 00:06:12.379 success 31936, unsuccess 57, failed 0 00:06:12.379 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:12.379 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.379 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:12.379 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.379 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:12.379 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:12.379 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:12.379 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:06:12.379 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:12.379 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:06:12.379 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:12.379 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:12.379 rmmod nvme_tcp 00:06:12.638 rmmod nvme_fabrics 00:06:12.638 rmmod nvme_keyring 00:06:12.638 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:12.638 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:06:12.638 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:06:12.638 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1262808 ']' 00:06:12.638 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1262808 00:06:12.638 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1262808 ']' 00:06:12.638 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1262808 00:06:12.638 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:12.638 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:12.638 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1262808 00:06:12.638 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:12.638 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:12.638 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1262808' 00:06:12.638 killing process with pid 1262808 00:06:12.638 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1262808 00:06:12.638 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1262808 00:06:12.897 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:12.897 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:12.897 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:12.897 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:12.897 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:12.897 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:12.897 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:12.897 23:12:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:14.806 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:14.806 00:06:14.806 real 0m7.148s 00:06:14.806 user 0m9.991s 00:06:14.806 sys 0m2.624s 00:06:14.806 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.806 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:14.806 ************************************ 00:06:14.806 END TEST nvmf_abort 00:06:14.806 ************************************ 00:06:14.806 23:12:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:14.806 23:12:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:14.806 23:12:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.806 23:12:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:14.806 ************************************ 00:06:14.806 START TEST nvmf_ns_hotplug_stress 00:06:14.806 ************************************ 00:06:14.806 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:15.066 * Looking for test storage... 
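Before the hotplug test proceeds, the abort test's nvmftestfini teardown just traced condenses to the following sketch; the body of _remove_spdk_ns is not shown in this log, so the ip netns delete line is an assumption:

  sync
  modprobe -v -r nvme-tcp        # unloads nvme_tcp, nvme_fabrics, nvme_keyring as logged
  modprobe -v -r nvme-fabrics
  kill 1262808 && wait 1262808   # killprocess: stop the nvmf_tgt started earlier
  ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1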
00:06:15.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:15.066 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.066 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:15.066 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.066 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.066 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.066 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.066 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.066 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.066 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.066 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.066 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.066 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.066 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:15.066 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:15.066 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.066 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.066 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.066 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.066 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.066 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.066 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.066 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.066 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.066 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.066 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.066 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:15.067 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.067 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:06:15.067 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:15.067 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:15.067 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.067 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.067 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.067 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:15.067 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:15.067 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:15.067 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:15.067 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:15.067 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:15.067 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:15.067 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:15.067 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:15.067 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:15.067 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:15.067 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:15.067 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.067 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:15.067 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:15.067 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:06:15.067 23:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
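gather_supported_nvmf_pci_devs, traced next, classifies NICs by PCI vendor/device ID before wiring them up. Distilled from the arrays in the trace; the family labels in the comments are an editorial gloss, not from the log:

  intel=0x8086; mellanox=0x15b3
  e810=(0x1592 0x159b)   # Intel E810 family (ice driver); both ports found here are 0x159b
  x722=(0x37d2)          # Intel X722
  mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013)   # Mellanox/NVIDIA NICs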
00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:16.970 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:16.970 23:12:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:16.970 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:16.970 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:16.971 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:16.971 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:16.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:16.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:06:16.971 00:06:16.971 --- 10.0.0.2 ping statistics --- 00:06:16.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:16.971 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:16.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:16.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:06:16.971 00:06:16.971 --- 10.0.0.1 ping statistics --- 00:06:16.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:16.971 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1265385 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1265385 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1265385 ']' 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
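The namespace wiring that produced those pings, collected from the nvmf_tcp_init trace above (every command appears verbatim in the log): the target port cvl_0_0 moves into its own namespace with 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1, and both directions are verified.

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator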
00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.971 23:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:17.231 [2024-07-25 23:12:14.732654] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:06:17.231 [2024-07-25 23:12:14.732755] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:17.231 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.231 [2024-07-25 23:12:14.772264] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:17.231 [2024-07-25 23:12:14.805151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:17.231 [2024-07-25 23:12:14.896220] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:17.231 [2024-07-25 23:12:14.896278] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:17.231 [2024-07-25 23:12:14.896296] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:17.231 [2024-07-25 23:12:14.896310] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:17.231 [2024-07-25 23:12:14.896322] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:17.231 [2024-07-25 23:12:14.896425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.231 [2024-07-25 23:12:14.896522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.231 [2024-07-25 23:12:14.896524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.489 23:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.489 23:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:17.489 23:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:17.489 23:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:17.489 23:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:17.489 23:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:17.489 23:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:17.489 23:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:17.745 [2024-07-25 23:12:15.282087] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.745 23:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:18.002 23:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:18.260 [2024-07-25 23:12:15.781574] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:18.260 23:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:18.517 23:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:18.775 Malloc0 00:06:18.775 23:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:19.033 Delay0 00:06:19.033 23:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.290 23:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:19.547 NULL1 00:06:19.547 23:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:19.805 23:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:19.805 23:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1265684 00:06:19.805 23:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:19.805 23:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.805 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.062 23:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.320 23:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:20.320 23:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:20.320 true 00:06:20.320 23:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:20.320 23:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.578 23:12:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.836 23:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:20.836 23:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:21.093 true 00:06:21.093 23:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:21.093 23:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.031 Read completed with error (sct=0, sc=11) 00:06:22.031 23:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.289 23:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:22.289 23:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:22.546 true 00:06:22.546 23:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:22.546 23:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.804 23:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.063 23:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:23.063 23:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:23.321 true 00:06:23.321 23:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:23.321 23:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.293 23:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.293 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.551 23:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:24.551 23:12:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:24.809 true 00:06:24.809 23:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:24.809 23:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.067 23:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.326 23:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:25.327 23:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:25.327 true 00:06:25.587 23:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:25.587 23:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.155 23:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.415 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.674 23:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:26.674 23:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:26.674 true 00:06:26.674 23:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:26.674 23:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.931 23:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.189 23:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:27.189 23:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:27.448 true 00:06:27.448 23:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:27.448 23:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.384 23:12:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.642 23:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:28.642 23:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:28.899 true 00:06:28.899 23:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:28.899 23:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.157 23:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.415 23:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:29.415 23:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:29.673 true 00:06:29.673 23:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:29.673 23:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.611 23:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.611 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.869 23:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:30.869 23:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:31.126 true 00:06:31.126 23:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:31.126 23:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.384 23:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.643 23:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:31.643 23:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:31.643 true 00:06:31.902 23:12:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:31.902 23:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.840 23:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.840 23:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:32.840 23:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:33.098 true 00:06:33.098 23:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:33.098 23:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.355 23:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.613 23:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:33.613 23:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:33.871 true 00:06:33.871 23:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:33.871 23:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.807 23:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.065 23:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:35.065 23:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:35.324 true 00:06:35.324 23:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:35.324 23:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.582 23:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.839 23:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:35.839 23:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:36.096 true 00:06:36.096 23:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:36.096 23:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.030 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.030 23:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.030 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.030 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.288 23:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:37.288 23:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:37.546 true 00:06:37.546 23:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:37.546 23:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.804 23:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.061 23:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:38.061 23:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:38.319 true 00:06:38.319 23:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:38.319 23:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.255 23:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.255 23:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:39.255 23:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1019 00:06:39.512 true 00:06:39.512 23:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:39.512 23:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.771 23:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.067 23:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:40.067 23:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:40.325 true 00:06:40.325 23:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:40.325 23:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.262 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.262 23:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.520 23:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:41.520 23:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:41.777 true 00:06:41.777 23:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:41.777 23:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.035 23:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.292 23:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:42.293 23:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:42.550 true 00:06:42.550 23:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:42.550 23:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.487 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.487 23:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.487 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.487 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.743 23:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:43.743 23:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:44.000 true 00:06:44.000 23:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:44.000 23:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.257 23:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.514 23:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:44.515 23:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:44.773 true 00:06:44.773 23:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:44.773 23:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.708 23:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.964 23:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:45.964 23:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:46.222 true 00:06:46.222 23:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:46.222 23:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.222 23:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.787 23:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:46.787 23:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:46.787 true 00:06:46.787 23:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:46.787 23:12:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.724 23:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.982 23:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:47.982 23:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:48.241 true 00:06:48.241 23:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:48.241 23:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.500 23:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.500 23:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:48.500 23:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:48.758 true 00:06:48.758 23:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:48.758 23:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.693 23:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.951 23:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:49.951 23:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:49.951 Initializing NVMe Controllers 00:06:49.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:49.951 Controller IO queue size 128, less than required. 00:06:49.951 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:49.951 Controller IO queue size 128, less than required. 00:06:49.951 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:49.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:49.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:49.951 Initialization complete. 
Launching workers. 00:06:49.951 ======================================================== 00:06:49.951 Latency(us) 00:06:49.951 Device Information : IOPS MiB/s Average min max 00:06:49.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 766.20 0.37 87644.80 3118.14 1148414.71 00:06:49.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11016.17 5.38 11584.34 2325.33 452131.75 00:06:49.951 ======================================================== 00:06:49.951 Total : 11782.38 5.75 16530.53 2325.33 1148414.71 00:06:49.951 00:06:50.209 true 00:06:50.209 23:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1265684 00:06:50.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1265684) - No such process 00:06:50.209 23:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1265684 00:06:50.209 23:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.467 23:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:50.723 23:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:50.723 23:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:50.723 23:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:50.723 23:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:50.723 23:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:50.980 null0 00:06:50.980 23:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:50.980 23:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:50.980 23:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:51.238 null1 00:06:51.238 23:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:51.238 23:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:51.238 23:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:51.497 null2 00:06:51.497 23:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:51.497 23:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:51.497 23:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create 
null3 100 4096 00:06:51.497 null3 00:06:51.754 23:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:51.754 23:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:51.754 23:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:51.754 null4 00:06:51.754 23:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:51.754 23:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:51.754 23:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:52.012 null5 00:06:52.012 23:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:52.012 23:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:52.012 23:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:52.269 null6 00:06:52.269 23:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:52.269 23:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:52.269 23:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:52.526 null7 00:06:52.526 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:52.526 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1269767 1269768 1269770 1269772 1269774 1269776 1269778 1269780 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.527 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:52.785 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.785 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:52.785 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:53.043 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:53.043 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:53.043 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:53.043 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:53.043 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:53.300 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.301 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.301 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:53.301 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.301 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.301 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:53.301 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.301 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.301 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:53.301 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.301 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.301 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:53.301 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.301 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.301 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:53.301 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.301 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.301 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:53.301 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:53.301 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.301 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:53.301 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.301 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.301 23:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:53.558 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:53.558 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:53.558 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.558 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:53.558 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:53.558 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:53.558 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:53.558 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:53.816 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.816 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.816 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:53.816 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.816 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.816 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:53.816 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.816 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.816 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:53.816 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.816 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.816 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:53.816 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.816 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.816 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:53.816 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.816 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.816 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:53.816 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.816 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.816 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:53.816 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.816 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.816 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:54.073 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:54.073 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.073 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:54.073 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:54.073 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:54.073 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:54.073 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:54.073 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:54.331 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.331 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.331 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:54.331 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.331 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.331 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:54.331 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.331 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.331 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:54.331 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.331 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.331 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:54.331 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.331 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.331 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:54.331 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.331 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.331 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:54.331 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.331 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.331 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:54.331 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.331 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.331 23:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:54.589 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.589 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:54.589 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:54.589 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:54.589 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:54.589 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:54.589 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:54.589 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:54.846 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:54.846 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.846 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:54.846 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.846 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.846 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:54.846 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.846 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.846 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:54.846 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.846 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.846 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:54.846 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.846 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.846 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:54.846 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.846 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.846 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:54.846 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.846 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.846 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:54.846 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.846 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.846 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:55.104 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.104 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:55.104 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:55.104 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:55.104 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:55.104 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:55.104 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:55.104 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:55.361 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.361 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.361 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:55.361 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.361 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.361 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:55.361 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.361 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.361 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:55.361 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:55.362 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.362 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:55.362 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.362 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.362 23:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:55.362 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.362 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.362 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:55.362 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.362 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.362 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:55.362 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.362 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.362 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:55.650 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:55.650 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.650 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:55.650 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:55.650 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:55.650 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:55.650 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:55.650 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:55.908 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.908 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.908 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:55.908 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.908 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.908 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:55.908 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.908 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.908 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:55.908 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.908 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.908 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:55.908 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.908 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.908 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:55.908 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.908 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.908 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.908 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.908 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:55.908 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:55.908 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.908 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.908 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:56.166 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.166 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:56.166 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:56.166 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:56.166 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:56.166 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:56.166 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:56.166 23:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:56.424 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.424 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.424 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.424 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:56.424 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.424 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:56.424 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.424 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.424 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:56.424 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.424 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.424 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:56.424 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.424 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.424 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:56.424 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.424 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.424 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:56.424 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.424 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.424 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:56.424 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.424 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.424 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:56.682 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.682 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:56.682 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:56.682 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:56.682 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:56.682 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:56.682 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:56.683 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:56.942 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.942 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.942 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:56.942 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.942 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.942 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:56.942 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.942 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.942 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:56.942 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.942 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.942 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:56.942 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.942 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.942 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:56.942 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.942 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.942 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:56.942 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.942 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.942 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:56.942 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.942 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.942 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:57.201 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:57.201 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.201 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:57.201 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:57.201 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:57.459 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:57.459 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:57.459 23:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:57.459 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
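The @16-@18 tags above all point at the hot loop of ns_hotplug_stress.sh: eight workers run concurrently, one per namespace, each attaching its null bdev to cnode1 and detaching it again ten times over. That is why the add_ns/remove_ns calls land out of order and why the (( ++i )) / (( i < 10 )) trace pairs occasionally double up: the xtrace streams of the parallel workers interleave. A minimal sketch of the shape the trace implies (an illustrative reconstruction, not the verbatim upstream script):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsystem=nqn.2016-06.io.spdk:cnode1

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do                                       # tag @16
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$subsystem" "$bdev"  # tag @17
            "$rpc_py" nvmf_subsystem_remove_ns "$subsystem" "$nsid"          # tag @18
        done
    }

    for n in {1..8}; do
        add_remove "$n" "null$((n - 1))" &    # null0..null7 become nsid 1..8
    done
    wait

The loop resumes below and runs until every worker has finished its ten passes.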
00:06:57.459 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.459 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:57.717 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.717 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.717 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:57.717 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.717 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.717 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:57.717 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.717 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.718 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.718 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.718 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:57.718 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:57.718 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.718 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.718 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:57.718 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.718 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.718 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:57.718 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.718 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.718 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:57.976 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.976 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:57.976 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:57.976 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:57.976 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:57.976 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:57.976 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:57.976 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:58.234 rmmod nvme_tcp 00:06:58.234 rmmod nvme_fabrics 00:06:58.234 rmmod nvme_keyring 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1265385 ']' 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1265385 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1265385 ']' 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1265385 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1265385 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1265385' 00:06:58.234 killing process with pid 1265385 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1265385 00:06:58.234 23:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1265385 00:06:58.493 23:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:58.493 23:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:58.493 23:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:58.493 23:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:58.493 23:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:58.493 23:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.493 23:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:58.493 23:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.028 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:01.028 00:07:01.028 real 0m45.612s 00:07:01.028 user 3m28.273s 00:07:01.028 sys 0m16.389s 00:07:01.028 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.028 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:01.028 ************************************ 00:07:01.028 END TEST nvmf_ns_hotplug_stress 00:07:01.028 ************************************ 00:07:01.028 23:12:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:01.028 23:12:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:01.028 23:12:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.028 23:12:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:01.028 ************************************ 00:07:01.028 START TEST nvmf_delete_subsystem 00:07:01.028 ************************************ 00:07:01.028 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:01.028 * Looking for test storage... 
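The real/user/sys triple above is the `time` output that the run_test wrapper records around each script, and the starred banners bracket its pass/fail bookkeeping: the hotplug stress phase cost about 45 seconds of wall clock and roughly 3.5 CPU-minutes across the two reactors. To replay just the phase starting here outside of CI, the invocation reduces to something like the following (same workspace layout as this log; assumes the root privileges and hugepage setup the harness normally provides):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./test/nvmf/target/delete_subsystem.sh --transport=tcp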
00:07:01.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:01.029 23:12:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
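gather_supported_nvmf_pci_devs first declares the empty e810/x722/mlx arrays seen above, then, as traced below, populates them with known Intel and Mellanox PCI device IDs and keeps only the functions that expose a kernel netdev. On this rig that matches the two E810 functions 0000:0a:00.0 and 0000:0a:00.1 (device 0x159b, ice driver), whose netdevs show up as cvl_0_0 and cvl_0_1. The sysfs probe at its core can be reproduced by hand (a sketch using this machine's addresses; substitute the PCI functions on other hosts):

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $dev ]] && echo "$pci -> ${dev##*/}"   # prints e.g. "0000:0a:00.0 -> cvl_0_0"
        done
    done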
00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:02.933 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:02.933 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:02.933 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:02.933 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:02.933 23:13:00 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:02.933 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:02.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:02.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:07:02.934 00:07:02.934 --- 10.0.0.2 ping statistics --- 00:07:02.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.934 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:02.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:02.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:07:02.934 00:07:02.934 --- 10.0.0.1 ping statistics --- 00:07:02.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.934 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1272527 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1272527 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1272527 ']' 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.934 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.934 [2024-07-25 23:13:00.490656] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
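By this point nvmf_tcp_init has split the two ports across a network namespace: cvl_0_0 was moved into cvl_0_0_ns_spdk as the target side (10.0.0.2/24), cvl_0_1 stayed in the root namespace as the initiator side (10.0.0.1/24), an iptables rule admits TCP port 4420, and the two pings above confirmed reachability in both directions. Condensed from the trace, the plumbing amounts to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

nvmf_tgt (PID 1272527) is then started inside that namespace with -m 0x3, i.e. reactors pinned to cores 0 and 1, which is why two "Reactor started" notices appear in the startup banner that follows.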
00:07:02.934 [2024-07-25 23:13:00.490723] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.934 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.934 [2024-07-25 23:13:00.530196] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:02.934 [2024-07-25 23:13:00.558421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:02.934 [2024-07-25 23:13:00.651904] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:02.934 [2024-07-25 23:13:00.651958] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:02.934 [2024-07-25 23:13:00.651987] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:02.934 [2024-07-25 23:13:00.651999] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:02.934 [2024-07-25 23:13:00.652009] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:02.934 [2024-07-25 23:13:00.652144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.934 [2024-07-25 23:13:00.652150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.193 [2024-07-25 23:13:00.794770] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.193 [2024-07-25 23:13:00.811017] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.193 NULL1 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.193 Delay0 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1272647 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:03.193 23:13:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:03.193 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.193 [2024-07-25 23:13:00.895711] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
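The experiment is now fully staged: NULL1 is a 1000 MiB null bdev with a 512-byte block size, Delay0 wraps it with one-second average and p99 latencies for both reads and writes (the -r/-t/-w/-n values are in microseconds), and that deliberately slow bdev is the only namespace of cnode1. spdk_nvme_perf (PID 1272647) hammers it from cores 2-3 (-c 0xC) with 128 outstanding 512-byte I/Os at a 70/30 read/write mix for 5 seconds, and the two-second sleep lets that queue fill before the subsystem is deleted out from under it. Collected from the rpc_cmd traces above (rpc_cmd is the harness's wrapper around scripts/rpc.py; paths shortened here, working directory assumed to be the spdk repo), the target-side setup is:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0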
00:07:05.720 23:13:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:05.720 23:13:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:05.720 23:13:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:05.720 [... several hundred repeated 'Read/Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' records elided here: the queued perf I/O fails as expected once the subsystem is deleted ...]
00:07:05.720 [2024-07-25 23:13:02.985476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7100 is same with the state(5) to be set
00:07:05.721 [2024-07-25 23:13:02.986360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3bd4000c00 is same with the state(5) to be set
00:07:06.286 [2024-07-25 23:13:03.950991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee4b40 is same with the state(5) to be set
00:07:06.286 [2024-07-25 23:13:03.987002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3bd400d7a0 is same with the state(5) to be set
00:07:06.286 [2024-07-25 23:13:03.987768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd300 is same with the state(5) to be set
00:07:06.287 [2024-07-25 23:13:03.987972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec6f20 is same with the state(5) to be set
00:07:06.287 [2024-07-25 23:13:03.988162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3bd400d000 is same with the state(5) to be set
00:07:06.287 Initializing NVMe Controllers
00:07:06.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:06.287 Controller IO queue size 128, less than required.
00:07:06.287 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:06.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:06.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:06.287 Initialization complete. Launching workers.
00:07:06.287 ========================================================
00:07:06.287 Latency(us)
00:07:06.287 Device Information : IOPS MiB/s Average min max
00:07:06.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.64 0.09 881963.52 735.49 1013358.47
00:07:06.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 176.64 0.09 882057.81 408.96 1013443.15
00:07:06.287 ========================================================
00:07:06.287 Total : 353.28 0.17 882010.67 408.96 1013443.15
00:07:06.287
00:07:06.287 [2024-07-25 23:13:03.989024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee4b40 (9): Bad file descriptor
00:07:06.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:06.287 23:13:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:06.287 23:13:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:06.287 23:13:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1272647
00:07:06.287 23:13:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:06.852 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:06.852 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1272647
00:07:06.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1272647) - No such process
00:07:06.852 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1272647
00:07:06.852 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:07:06.852 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1272647
00:07:06.852 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:07:06.853 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:06.853 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:07:06.853 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:06.853 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1272647
00:07:06.853 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:07:06.853 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:06.853 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:06.853 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:06.853 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:06.853 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:06.853 23:13:04
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.853 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.853 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:06.853 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.853 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.853 [2024-07-25 23:13:04.513238] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:06.853 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.853 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.853 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.853 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.853 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.853 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1273079 00:07:06.853 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:06.853 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:06.853 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1273079 00:07:06.853 23:13:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:06.853 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.110 [2024-07-25 23:13:04.578274] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
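Both rounds above use the same wait-and-assert pattern: poll the perf pid until it disappears, then (first round, delete_subsystem.sh@45) confirm via NOT that reaping it reports failure. A hedged reconstruction from the xtrace, since the script text itself is not shown here:

  # Reconstructed from the trace; loop bound follows the @38/@60 checks above.
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do    # perf still running?
      (( delay++ > 30 )) && exit 1             # bail out after ~15 s of 0.5 s naps
      sleep 0.5
  done
  # perf exits non-zero once its subsystem vanishes mid-I/O, so the harness
  # asserts the failure instead of ignoring it (NOT inverts the exit status):
  if wait "$perf_pid"; then exit 1; fi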
00:07:07.368 23:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:07.368 23:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1273079 00:07:07.368 23:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:07.933 23:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:07.933 23:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1273079 00:07:07.933 23:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:08.497 23:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:08.497 23:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1273079 00:07:08.497 23:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:09.062 23:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:09.062 23:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1273079 00:07:09.062 23:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:09.319 23:13:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:09.319 23:13:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1273079 00:07:09.319 23:13:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:09.885 23:13:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:09.885 23:13:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1273079 00:07:09.885 23:13:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:10.142 Initializing NVMe Controllers 00:07:10.142 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:10.142 Controller IO queue size 128, less than required. 00:07:10.142 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:10.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:10.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:10.142 Initialization complete. Launching workers. 
00:07:10.142 ========================================================
00:07:10.142 Latency(us)
00:07:10.142 Device Information : IOPS MiB/s Average min max
00:07:10.143 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1006005.24 1000181.24 1043425.35
00:07:10.143 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004541.16 1000285.96 1042755.17
00:07:10.143 ========================================================
00:07:10.143 Total : 256.00 0.12 1005273.20 1000181.24 1043425.35
00:07:10.143
00:07:10.400 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:10.400 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1273079
00:07:10.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1273079) - No such process
00:07:10.400 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1273079
00:07:10.400 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:10.400 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:10.400 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:10.400 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:07:10.400 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:10.400 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:07:10.400 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:10.400 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:07:10.400 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:10.400 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:07:10.400 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:07:10.400 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1272527 ']'
00:07:10.400 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1272527
00:07:10.400 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1272527 ']'
00:07:10.400 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1272527
00:07:10.400 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:07:10.400 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:10.400 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1272527
00:07:10.659 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:10.659 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '['
reactor_0 = sudo ']' 00:07:10.659 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1272527' 00:07:10.659 killing process with pid 1272527 00:07:10.659 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1272527 00:07:10.659 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1272527 00:07:10.659 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:10.659 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:10.659 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:10.659 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:10.659 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:10.659 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.659 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:10.659 23:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:13.193 00:07:13.193 real 0m12.240s 00:07:13.193 user 0m27.713s 00:07:13.193 sys 0m2.900s 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:13.193 ************************************ 00:07:13.193 END TEST nvmf_delete_subsystem 00:07:13.193 ************************************ 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:13.193 ************************************ 00:07:13.193 START TEST nvmf_host_management 00:07:13.193 ************************************ 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:13.193 * Looking for test storage... 
00:07:13.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.193 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:07:13.194 23:13:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:07:15.121 
23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:15.121 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:15.121 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:15.121 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:15.121 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:15.121 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:15.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:15.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms
00:07:15.122
00:07:15.122 --- 10.0.0.2 ping statistics ---
00:07:15.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:15.122 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms
00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:15.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:15.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms
00:07:15.122
00:07:15.122 --- 10.0.0.1 ping statistics ---
00:07:15.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:15.122 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms
00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0
00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1275421
00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1275421
00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1275421 ']'
00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on
UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.122 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.122 [2024-07-25 23:13:12.644787] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:07:15.122 [2024-07-25 23:13:12.644882] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.122 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.122 [2024-07-25 23:13:12.683842] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:15.122 [2024-07-25 23:13:12.710773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:15.122 [2024-07-25 23:13:12.800952] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.122 [2024-07-25 23:13:12.801012] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.122 [2024-07-25 23:13:12.801040] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.122 [2024-07-25 23:13:12.801051] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.122 [2024-07-25 23:13:12.801069] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
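All of the target-side setup above happens in a network namespace: nvmf/common.sh@229-268 moved one ice port (cvl_0_0, 10.0.0.2) into cvl_0_0_ns_spdk, kept its sibling (cvl_0_1, 10.0.0.1) in the root namespace, proved reachability both ways with ping, and then launched nvmf_tgt inside the namespace. A condensed sketch with names and flags from the trace; the helper also installs traps and extra checks not shown here:

  # Target NIC lives in its own netns; initiator NIC stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns
  # The target itself then runs inside the namespace (mask 0x1E = cores 1-4):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &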
00:07:15.122 [2024-07-25 23:13:12.801156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.122 [2024-07-25 23:13:12.801221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.122 [2024-07-25 23:13:12.801271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:15.122 [2024-07-25 23:13:12.801273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.399 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.399 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:15.399 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:15.399 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:15.399 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.399 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:15.399 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:15.399 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.399 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.399 [2024-07-25 23:13:12.950448] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:15.399 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.399 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystems 00:07:15.399 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:15.399 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.399 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:15.399 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:15.399 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:15.399 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.399 23:13:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.399 Malloc0 00:07:15.399 [2024-07-25 23:13:13.009539] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.399 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.399 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:15.399 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:15.399 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.399 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management --
target/host_management.sh@73 -- # perfpid=1275474 00:07:15.399 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1275474 /var/tmp/bdevperf.sock 00:07:15.399 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1275474 ']' 00:07:15.399 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:15.399 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:15.399 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:15.399 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.399 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:15.399 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:15.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:15.399 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:15.399 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.399 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:15.399 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.399 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:15.399 { 00:07:15.399 "params": { 00:07:15.399 "name": "Nvme$subsystem", 00:07:15.399 "trtype": "$TEST_TRANSPORT", 00:07:15.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:15.399 "adrfam": "ipv4", 00:07:15.399 "trsvcid": "$NVMF_PORT", 00:07:15.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:15.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:15.399 "hdgst": ${hdgst:-false}, 00:07:15.399 "ddgst": ${ddgst:-false} 00:07:15.399 }, 00:07:15.399 "method": "bdev_nvme_attach_controller" 00:07:15.399 } 00:07:15.399 EOF 00:07:15.399 )") 00:07:15.399 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:15.399 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:15.399 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:15.399 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:15.399 "params": { 00:07:15.399 "name": "Nvme0", 00:07:15.399 "trtype": "tcp", 00:07:15.399 "traddr": "10.0.0.2", 00:07:15.399 "adrfam": "ipv4", 00:07:15.399 "trsvcid": "4420", 00:07:15.399 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:15.399 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:15.399 "hdgst": false, 00:07:15.399 "ddgst": false 00:07:15.399 }, 00:07:15.399 "method": "bdev_nvme_attach_controller" 00:07:15.399 }' 00:07:15.399 [2024-07-25 23:13:13.087782] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:07:15.399 [2024-07-25 23:13:13.087854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275474 ] 00:07:15.399 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.399 [2024-07-25 23:13:13.121055] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:15.657 [2024-07-25 23:13:13.150337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.657 [2024-07-25 23:13:13.237818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.915 Running I/O for 10 seconds... 00:07:15.915 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.915 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:15.915 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:15.915 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.915 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.915 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.915 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:15.915 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:15.915 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:15.915 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:15.915 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:15.915 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:15.915 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:15.915 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:15.915 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:15.915 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:15.915 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.915 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.915 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.915 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:15.915 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 
']' 00:07:15.915 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:16.173 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:16.173 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:16.173 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:16.173 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:16.173 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.173 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.173 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.173 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:07:16.173 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:07:16.173 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:16.173 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:16.173 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:16.173 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:16.173 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.173 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.173 [2024-07-25 23:13:13.794825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.173 [2024-07-25 23:13:13.794893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.173 [2024-07-25 23:13:13.794923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.173 [2024-07-25 23:13:13.794939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.173 [2024-07-25 23:13:13.794956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.173 [2024-07-25 23:13:13.794971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.173 [2024-07-25 23:13:13.794987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.173 [2024-07-25 23:13:13.795002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.173 [2024-07-25 23:13:13.795019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.173 [2024-07-25 23:13:13.795033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.173 [2024-07-25 23:13:13.795049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.173 [2024-07-25 23:13:13.795073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.173 [2024-07-25 23:13:13.795139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.173 [2024-07-25 23:13:13.795157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.173 [2024-07-25 23:13:13.795173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.173 [2024-07-25 23:13:13.795187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.173 [2024-07-25 23:13:13.795203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.173 [2024-07-25 23:13:13.795217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.173 [2024-07-25 23:13:13.795233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.173 [2024-07-25 23:13:13.795247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.173 [2024-07-25 23:13:13.795263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.173 [2024-07-25 23:13:13.795277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.173 [2024-07-25 23:13:13.795293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.173 [2024-07-25 23:13:13.795307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.173 [2024-07-25 23:13:13.795323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.795337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.795353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.795366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.795387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.795402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.795417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.795431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.795447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.795461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.795477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.795491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.795506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.795524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.795541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.795554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.795570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.795584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.795599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.795613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.795629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.795643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.795658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.795672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.795688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.795702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.795718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.795732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.795747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.795761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.795779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.795793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.795809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.795822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.795839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.795853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.795869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.795882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.795903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.795917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.795933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.795947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.795962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.795976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.795992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:16.174 [2024-07-25 23:13:13.796006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.796021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.796035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.796051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.796072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.796089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.796103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.796120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.796134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.796150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.796164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.796180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.796194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.796209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.796224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.796239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.796253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.796270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.796287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.796303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 
23:13:13.796317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.796333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.796347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.796363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.796379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.796395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.796409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.796425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.796438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.796454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.796469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 [2024-07-25 23:13:13.796485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.174 [2024-07-25 23:13:13.796499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.174 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.175 [2024-07-25 23:13:13.796520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.175 [2024-07-25 23:13:13.796534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.175 [2024-07-25 23:13:13.796549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.175 [2024-07-25 23:13:13.796563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.175 [2024-07-25 23:13:13.796579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.175 [2024-07-25 23:13:13.796593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.175 [2024-07-25 23:13:13.796608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1
cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.175 [2024-07-25 23:13:13.796622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.175 [2024-07-25 23:13:13.796638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.175 [2024-07-25 23:13:13.796652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.175 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:16.175 [2024-07-25 23:13:13.796671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.175 [2024-07-25 23:13:13.796685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.175 [2024-07-25 23:13:13.796701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.175 [2024-07-25 23:13:13.796715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.175 [2024-07-25 23:13:13.796730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.175 [2024-07-25 23:13:13.796744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.175 [2024-07-25 23:13:13.796760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.175 [2024-07-25 23:13:13.796773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.175 [2024-07-25 23:13:13.796790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.175 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.175 [2024-07-25 23:13:13.796804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.175 [2024-07-25 23:13:13.796820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.175 [2024-07-25 23:13:13.796834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.175 [2024-07-25 23:13:13.796849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.175 [2024-07-25 23:13:13.796863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.175 [2024-07-25 23:13:13.796879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:16.175 [2024-07-25 23:13:13.796893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.175 [2024-07-25 23:13:13.796909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a965f0 is same with the state(5) to be set 00:07:16.175 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.175 [2024-07-25 23:13:13.796995] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a965f0 was disconnected and freed. reset controller. 00:07:16.175 [2024-07-25 23:13:13.798220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:16.175 task offset: 81792 on job bdev=Nvme0n1 fails 00:07:16.175 00:07:16.175 Latency(us) 00:07:16.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.175 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:16.175 Job: Nvme0n1 ended in about 0.38 seconds with error 00:07:16.175 Verification LBA range: start 0x0 length 0x400 00:07:16.175 Nvme0n1 : 0.38 1505.95 94.12 167.33 0.00 37112.19 6844.87 33787.45 00:07:16.175 =================================================================================================================== 00:07:16.175 Total : 1505.95 94.12 167.33 0.00 37112.19 6844.87 33787.45 00:07:16.175 [2024-07-25 23:13:13.800234] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:16.175 [2024-07-25 23:13:13.800266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1664b50 (9): Bad file descriptor 00:07:16.175 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.175 23:13:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 [2024-07-25 23:13:13.845542] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:07:17.106 23:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1275474 00:07:17.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1275474) - No such process 00:07:17.106 23:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:17.106 23:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:17.106 23:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:17.106 23:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:17.106 23:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:17.106 23:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:17.106 23:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:17.106 23:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:17.106 { 00:07:17.106 "params": { 00:07:17.106 "name": "Nvme$subsystem", 00:07:17.106 "trtype": "$TEST_TRANSPORT", 00:07:17.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:17.106 "adrfam": "ipv4", 00:07:17.106 "trsvcid": "$NVMF_PORT", 00:07:17.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:17.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:17.106 "hdgst": ${hdgst:-false}, 00:07:17.106 "ddgst": ${ddgst:-false} 00:07:17.106 }, 00:07:17.106 "method": "bdev_nvme_attach_controller" 00:07:17.106 } 00:07:17.106 EOF 00:07:17.106 )") 00:07:17.106 23:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:17.106 23:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:17.106 23:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:17.106 23:13:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:17.106 "params": { 00:07:17.106 "name": "Nvme0", 00:07:17.106 "trtype": "tcp", 00:07:17.106 "traddr": "10.0.0.2", 00:07:17.106 "adrfam": "ipv4", 00:07:17.106 "trsvcid": "4420", 00:07:17.106 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:17.106 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:17.106 "hdgst": false, 00:07:17.106 "ddgst": false 00:07:17.106 }, 00:07:17.106 "method": "bdev_nvme_attach_controller" 00:07:17.106 }' 00:07:17.364 [2024-07-25 23:13:14.853971] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:07:17.364 [2024-07-25 23:13:14.854078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275747 ] 00:07:17.364 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.364 [2024-07-25 23:13:14.885508] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:17.364 [2024-07-25 23:13:14.914676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.364 [2024-07-25 23:13:15.003453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.621 Running I/O for 1 seconds... 00:07:18.992 00:07:18.992 Latency(us) 00:07:18.992 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:18.992 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:18.992 Verification LBA range: start 0x0 length 0x400 00:07:18.992 Nvme0n1 : 1.03 1684.90 105.31 0.00 0.00 37366.47 5898.24 32816.55 00:07:18.992 =================================================================================================================== 00:07:18.992 Total : 1684.90 105.31 0.00 0.00 37366.47 5898.24 32816.55 00:07:18.992 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:18.992 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:18.992 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:18.992 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:18.992 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:18.992 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:18.993 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:18.993 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:18.993 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:18.993 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:18.993 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:18.993 rmmod nvme_tcp 00:07:18.993 rmmod nvme_fabrics 00:07:18.993 rmmod nvme_keyring 00:07:18.993 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:18.993 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:18.993 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:18.993 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1275421 ']' 00:07:18.993 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1275421 00:07:18.993 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1275421 ']' 00:07:18.993 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1275421 00:07:18.993 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:18.993 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:18.993 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1275421 00:07:18.993 23:13:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:18.993 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:18.993 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1275421' 00:07:18.993 killing process with pid 1275421 00:07:18.993 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1275421 00:07:18.993 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1275421 00:07:19.250 [2024-07-25 23:13:16.870792] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:19.250 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:19.250 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:19.250 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:19.250 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:19.250 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:19.250 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.250 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:19.250 23:13:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.782 23:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:21.782 23:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:21.782 00:07:21.782 real 0m8.483s 00:07:21.782 user 0m19.281s 00:07:21.782 sys 0m2.573s 00:07:21.782 23:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.782 23:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.782 ************************************ 00:07:21.782 END TEST nvmf_host_management 00:07:21.782 ************************************ 00:07:21.782 23:13:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:21.782 23:13:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:21.782 23:13:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.782 23:13:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:21.782 ************************************ 00:07:21.782 START TEST nvmf_lvol 00:07:21.782 ************************************ 00:07:21.782 23:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:21.782 * Looking for test storage... 
00:07:21.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:07:21.782 23:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:23.684 23:13:20 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:23.684 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:23.684 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:23.684 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:23.684 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:23.684 23:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:23.684 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:23.684 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:23.684 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:23.684 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:23.684 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:23.684 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:23.684 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:23.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:23.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:07:23.684 00:07:23.684 --- 10.0.0.2 ping statistics --- 00:07:23.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.684 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:07:23.684 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:23.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:23.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:07:23.684 00:07:23.684 --- 10.0.0.1 ping statistics --- 00:07:23.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.685 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:07:23.685 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:23.685 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:07:23.685 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:23.685 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:23.685 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:23.685 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:23.685 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:23.685 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:23.685 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:23.685 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:23.685 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:23.685 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:23.685 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:23.685 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1277834 00:07:23.685 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1277834 00:07:23.685 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:23.685 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1277834 ']' 00:07:23.685 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.685 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.685 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.685 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.685 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:23.685 [2024-07-25 23:13:21.178668] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:07:23.685 [2024-07-25 23:13:21.178747] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.685 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.685 [2024-07-25 23:13:21.215741] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:07:23.685 [2024-07-25 23:13:21.248611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:23.685 [2024-07-25 23:13:21.339624] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:23.685 [2024-07-25 23:13:21.339691] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:23.685 [2024-07-25 23:13:21.339708] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:23.685 [2024-07-25 23:13:21.339722] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:23.685 [2024-07-25 23:13:21.339733] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:23.685 [2024-07-25 23:13:21.339823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.685 [2024-07-25 23:13:21.339893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.685 [2024-07-25 23:13:21.339896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.943 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.943 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:23.943 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:23.943 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:23.943 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:23.943 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:23.943 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:24.201 [2024-07-25 23:13:21.712636] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.201 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:24.459 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:24.459 23:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:24.716 23:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:24.716 23:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:24.973 23:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:25.231 23:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=529b8426-1b3f-42db-bf4d-f2a453d4af6f 00:07:25.231 23:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 529b8426-1b3f-42db-bf4d-f2a453d4af6f lvol 20 00:07:25.488 23:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- target/nvmf_lvol.sh@32 -- # lvol=74c2c8c0-6eb6-4aa3-bd6e-95db185022d8 00:07:25.488 23:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:25.746 23:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 74c2c8c0-6eb6-4aa3-bd6e-95db185022d8 00:07:26.003 23:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:26.260 [2024-07-25 23:13:23.758630] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.260 23:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:26.518 23:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1278252 00:07:26.518 23:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:26.518 23:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:26.518 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.451 23:13:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 74c2c8c0-6eb6-4aa3-bd6e-95db185022d8 MY_SNAPSHOT 00:07:27.708 23:13:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f464cbbb-6122-45d9-9fdc-36a2ff224eb6 00:07:27.708 23:13:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 74c2c8c0-6eb6-4aa3-bd6e-95db185022d8 30 00:07:27.966 23:13:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f464cbbb-6122-45d9-9fdc-36a2ff224eb6 MY_CLONE 00:07:28.223 23:13:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6027671f-2f0a-407f-8e50-cd551ef950f8 00:07:28.223 23:13:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6027671f-2f0a-407f-8e50-cd551ef950f8 00:07:29.156 23:13:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1278252 00:07:37.260 Initializing NVMe Controllers 00:07:37.260 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:37.260 Controller IO queue size 128, less than required. 00:07:37.260 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:37.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:37.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:37.260 Initialization complete. Launching workers. 
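Condensed from the xtrace above, the data path this test builds and then mutates under load is small. A minimal sketch of the same flow — the rpc.py path is shortened, and the UUIDs are simply the values this particular run returned, so treat them as placeholders:

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                     # -> Malloc0 (64 MiB, 512 B blocks)
$rpc bdev_malloc_create 64 512                     # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)     # -> 529b8426-... in this run
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)    # 20 MiB lvol -> 74c2c8c0-... in this run
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# while spdk_nvme_perf holds randwrite I/O against the namespace for 10 s:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"

The perf numbers that follow are therefore collected while the lvol is being snapshotted, resized, cloned and inflated — which is exactly the stress this test is after.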
00:07:37.260 ========================================================
00:07:37.260 Latency(us)
00:07:37.260 Device Information : IOPS MiB/s Average min max
00:07:37.260 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10706.20 41.82 11958.01 2374.33 84237.22
00:07:37.260 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10651.30 41.61 12026.50 1771.62 60522.91
00:07:37.260 ========================================================
00:07:37.260 Total : 21357.50 83.43 11992.17 1771.62 84237.22
00:07:37.260
00:07:37.260 23:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:37.260 23:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 74c2c8c0-6eb6-4aa3-bd6e-95db185022d8
00:07:37.260 23:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 529b8426-1b3f-42db-bf4d-f2a453d4af6f
00:07:37.548 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:07:37.548 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:07:37.548 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:07:37.548 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:37.548 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync
00:07:37.548 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:37.548 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e
00:07:37.548 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:37.548 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:37.548 rmmod nvme_tcp
00:07:37.548 rmmod nvme_fabrics
00:07:37.548 rmmod nvme_keyring
00:07:37.548 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:37.548 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e
00:07:37.548 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0
00:07:37.548 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1277834 ']'
00:07:37.548 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1277834
00:07:37.548 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1277834 ']'
00:07:37.548 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1277834
00:07:37.548 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname
00:07:37.548 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:37.806 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1277834
00:07:37.806 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:37.806 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:37.806 23:13:35
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1277834' 00:07:37.806 killing process with pid 1277834 00:07:37.806 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1277834 00:07:37.806 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1277834 00:07:38.064 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:38.064 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:38.064 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:38.064 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:38.064 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:38.064 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.064 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:38.064 23:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.971 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:39.971 00:07:39.971 real 0m18.625s 00:07:39.971 user 1m2.130s 00:07:39.971 sys 0m6.125s 00:07:39.971 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.971 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:39.971 ************************************ 00:07:39.971 END TEST nvmf_lvol 00:07:39.971 ************************************ 00:07:39.971 23:13:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:39.971 23:13:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:39.971 23:13:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.971 23:13:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:39.971 ************************************ 00:07:39.971 START TEST nvmf_lvs_grow 00:07:39.971 ************************************ 00:07:39.971 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:40.230 * Looking for test storage... 
00:07:40.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.230 23:13:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:40.230 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.231 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:40.231 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:40.231 23:13:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:40.231 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.231 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:40.231 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:40.231 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:40.231 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.231 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.231 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.231 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:40.231 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:40.231 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:07:40.231 23:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:42.132 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:42.133 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:42.133 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:42.133 
23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:42.133 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:42.133 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:42.133 23:13:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:42.133 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:42.391 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:42.391 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:42.391 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:42.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:42.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:07:42.391 00:07:42.391 --- 10.0.0.2 ping statistics --- 00:07:42.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.391 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:07:42.391 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:42.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:42.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:07:42.392 00:07:42.392 --- 10.0.0.1 ping statistics --- 00:07:42.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.392 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:07:42.392 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.392 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:07:42.392 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:42.392 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.392 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:42.392 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:42.392 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.392 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:42.392 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:42.392 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:42.392 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:42.392 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:42.392 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.392 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1281525 00:07:42.392 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:42.392 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1281525 00:07:42.392 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1281525 ']' 00:07:42.392 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.392 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:42.392 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.392 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:42.392 23:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.392 [2024-07-25 23:13:39.953181] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
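As in the earlier suite, nvmftestinit has split the two ice/E810 ports between network namespaces so the NVMe/TCP traffic really leaves the host stack: cvl_0_0 (10.0.0.2) is moved into cvl_0_0_ns_spdk and serves as the target side, while cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator, with the bidirectional pings above as the sanity check. A sketch of the pattern, condensed from the ip/iptables calls in the trace — it assumes the two ports are looped back to each other on this phy rig, and paths are shortened:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator port, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target ns -> root ns
# the target app then runs entirely inside the namespace:
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1

Every listener the target opens thus binds inside the namespace, and the initiator-side tools connect across the physical link.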
00:07:42.392 [2024-07-25 23:13:39.953276] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.392 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.392 [2024-07-25 23:13:39.991054] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:42.392 [2024-07-25 23:13:40.023392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.650 [2024-07-25 23:13:40.121974] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.650 [2024-07-25 23:13:40.122034] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:42.650 [2024-07-25 23:13:40.122050] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:42.650 [2024-07-25 23:13:40.122070] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:42.650 [2024-07-25 23:13:40.122083] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:42.650 [2024-07-25 23:13:40.122119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.650 23:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.650 23:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:42.650 23:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:42.650 23:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:42.650 23:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.650 23:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:42.650 23:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:42.908 [2024-07-25 23:13:40.490884] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.908 23:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:42.908 23:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:42.908 23:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.908 23:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.908 ************************************ 00:07:42.908 START TEST lvs_grow_clean 00:07:42.908 ************************************ 00:07:42.908 23:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:42.908 23:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:42.908 23:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:42.909 23:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:42.909 23:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:42.909 23:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:42.909 23:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:42.909 23:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:42.909 23:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:42.909 23:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:43.167 23:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:43.167 23:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:43.425 23:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a4735928-d0b0-4d39-95af-0b3e0bb98409 00:07:43.425 23:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4735928-d0b0-4d39-95af-0b3e0bb98409 00:07:43.425 23:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:43.683 23:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:43.683 23:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:43.683 23:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a4735928-d0b0-4d39-95af-0b3e0bb98409 lvol 150 00:07:43.942 23:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4ea40c8a-f60e-4cb5-9cf1-bddfc6f29785 00:07:43.942 23:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:43.942 23:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:44.199 [2024-07-25 23:13:41.808330] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:44.199 [2024-07-25 23:13:41.808453] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 
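The cluster arithmetic behind the assertions here: the backing file starts at 200 MiB and the store uses 4 MiB clusters, i.e. 50 clusters in total, of which total_data_clusters == 49 is reported — the numbers imply the remaining cluster holds lvstore metadata, which --md-pages-per-cluster-ratio 300 over-provisions so the store can be grown later without relocating it. Doubling the file to 400 MiB (the 51200 -> 102400 block rescan above, at 4 KiB per block) should therefore land at 100 - 1 = 99 data clusters. As a sketch, with the aio file path shortened and variable names illustrative:

truncate -s 200M aio_file
rpc.py bdev_aio_create aio_file aio_bdev 4096        # 51200 blocks of 4 KiB = 200 MiB
lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)
rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # -> 49
rpc.py bdev_lvol_create -u "$lvs" lvol 150           # 150 MiB lvol fits in 49 * 4 MiB
truncate -s 400M aio_file                            # grow the backing file...
rpc.py bdev_aio_rescan aio_bdev                      # ...and re-read its size: 102400 blocks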
00:07:44.199 true 00:07:44.199 23:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4735928-d0b0-4d39-95af-0b3e0bb98409 00:07:44.199 23:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:44.457 23:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:44.457 23:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:44.714 23:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4ea40c8a-f60e-4cb5-9cf1-bddfc6f29785 00:07:44.973 23:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:45.231 [2024-07-25 23:13:42.803456] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.231 23:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:45.489 23:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1281967 00:07:45.489 23:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:45.489 23:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:45.489 23:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1281967 /var/tmp/bdevperf.sock 00:07:45.489 23:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1281967 ']' 00:07:45.489 23:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:45.489 23:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.489 23:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:45.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:45.489 23:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.489 23:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:45.489 [2024-07-25 23:13:43.107731] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:07:45.489 [2024-07-25 23:13:43.107816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281967 ] 00:07:45.489 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.489 [2024-07-25 23:13:43.145073] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:45.489 [2024-07-25 23:13:43.174591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.747 [2024-07-25 23:13:43.266557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.747 23:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.747 23:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:45.747 23:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:46.004 Nvme0n1 00:07:46.004 23:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:46.260 [ 00:07:46.260 { 00:07:46.260 "name": "Nvme0n1", 00:07:46.260 "aliases": [ 00:07:46.260 "4ea40c8a-f60e-4cb5-9cf1-bddfc6f29785" 00:07:46.261 ], 00:07:46.261 "product_name": "NVMe disk", 00:07:46.261 "block_size": 4096, 00:07:46.261 "num_blocks": 38912, 00:07:46.261 "uuid": "4ea40c8a-f60e-4cb5-9cf1-bddfc6f29785", 00:07:46.261 "assigned_rate_limits": { 00:07:46.261 "rw_ios_per_sec": 0, 00:07:46.261 "rw_mbytes_per_sec": 0, 00:07:46.261 "r_mbytes_per_sec": 0, 00:07:46.261 "w_mbytes_per_sec": 0 00:07:46.261 }, 00:07:46.261 "claimed": false, 00:07:46.261 "zoned": false, 00:07:46.261 "supported_io_types": { 00:07:46.261 "read": true, 00:07:46.261 "write": true, 00:07:46.261 "unmap": true, 00:07:46.261 "flush": true, 00:07:46.261 "reset": true, 00:07:46.261 "nvme_admin": true, 00:07:46.261 "nvme_io": true, 00:07:46.261 "nvme_io_md": false, 00:07:46.261 "write_zeroes": true, 00:07:46.261 "zcopy": false, 00:07:46.261 "get_zone_info": false, 00:07:46.261 "zone_management": false, 00:07:46.261 "zone_append": false, 00:07:46.261 "compare": true, 00:07:46.261 "compare_and_write": true, 00:07:46.261 "abort": true, 00:07:46.261 "seek_hole": false, 00:07:46.261 "seek_data": false, 00:07:46.261 "copy": true, 00:07:46.261 "nvme_iov_md": false 00:07:46.261 }, 00:07:46.261 "memory_domains": [ 00:07:46.261 { 00:07:46.261 "dma_device_id": "system", 00:07:46.261 "dma_device_type": 1 00:07:46.261 } 00:07:46.261 ], 00:07:46.261 "driver_specific": { 00:07:46.261 "nvme": [ 00:07:46.261 { 00:07:46.261 "trid": { 00:07:46.261 "trtype": "TCP", 00:07:46.261 "adrfam": "IPv4", 00:07:46.261 "traddr": "10.0.0.2", 00:07:46.261 "trsvcid": "4420", 00:07:46.261 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:46.261 }, 00:07:46.261 "ctrlr_data": { 00:07:46.261 "cntlid": 1, 00:07:46.261 "vendor_id": "0x8086", 00:07:46.261 "model_number": "SPDK bdev Controller", 00:07:46.261 "serial_number": "SPDK0", 00:07:46.261 "firmware_revision": "24.09", 00:07:46.261 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:07:46.261 "oacs": { 00:07:46.261 "security": 0, 00:07:46.261 "format": 0, 00:07:46.261 "firmware": 0, 00:07:46.261 "ns_manage": 0 00:07:46.261 }, 00:07:46.261 "multi_ctrlr": true, 00:07:46.261 "ana_reporting": false 00:07:46.261 }, 00:07:46.261 "vs": { 00:07:46.261 "nvme_version": "1.3" 00:07:46.261 }, 00:07:46.261 "ns_data": { 00:07:46.261 "id": 1, 00:07:46.261 "can_share": true 00:07:46.261 } 00:07:46.261 } 00:07:46.261 ], 00:07:46.261 "mp_policy": "active_passive" 00:07:46.261 } 00:07:46.261 } 00:07:46.261 ] 00:07:46.261 23:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1282097 00:07:46.261 23:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:46.261 23:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:46.519 Running I/O for 10 seconds... 00:07:47.451 Latency(us) 00:07:47.451 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.451 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.451 Nvme0n1 : 1.00 14870.00 58.09 0.00 0.00 0.00 0.00 0.00 00:07:47.451 =================================================================================================================== 00:07:47.451 Total : 14870.00 58.09 0.00 0.00 0.00 0.00 0.00 00:07:47.451 00:07:48.384 23:13:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a4735928-d0b0-4d39-95af-0b3e0bb98409 00:07:48.384 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.384 Nvme0n1 : 2.00 14901.00 58.21 0.00 0.00 0.00 0.00 0.00 00:07:48.384 =================================================================================================================== 00:07:48.384 Total : 14901.00 58.21 0.00 0.00 0.00 0.00 0.00 00:07:48.384 00:07:48.642 true 00:07:48.642 23:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4735928-d0b0-4d39-95af-0b3e0bb98409 00:07:48.642 23:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:48.899 23:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:48.899 23:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:48.899 23:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1282097 00:07:49.464 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.464 Nvme0n1 : 3.00 15035.67 58.73 0.00 0.00 0.00 0.00 0.00 00:07:49.464 =================================================================================================================== 00:07:49.464 Total : 15035.67 58.73 0.00 0.00 0.00 0.00 0.00 00:07:49.464 00:07:50.397 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.397 Nvme0n1 : 4.00 15055.00 58.81 0.00 0.00 0.00 0.00 0.00 00:07:50.397 
=================================================================================================================== 00:07:50.397 Total : 15055.00 58.81 0.00 0.00 0.00 0.00 0.00 00:07:50.397 00:07:51.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.771 Nvme0n1 : 5.00 15066.60 58.85 0.00 0.00 0.00 0.00 0.00 00:07:51.771 =================================================================================================================== 00:07:51.771 Total : 15066.60 58.85 0.00 0.00 0.00 0.00 0.00 00:07:51.771 00:07:52.337 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.337 Nvme0n1 : 6.00 15095.50 58.97 0.00 0.00 0.00 0.00 0.00 00:07:52.337 =================================================================================================================== 00:07:52.337 Total : 15095.50 58.97 0.00 0.00 0.00 0.00 0.00 00:07:52.337 00:07:53.710 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.710 Nvme0n1 : 7.00 15107.14 59.01 0.00 0.00 0.00 0.00 0.00 00:07:53.710 =================================================================================================================== 00:07:53.710 Total : 15107.14 59.01 0.00 0.00 0.00 0.00 0.00 00:07:53.710 00:07:54.644 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.644 Nvme0n1 : 8.00 15131.75 59.11 0.00 0.00 0.00 0.00 0.00 00:07:54.644 =================================================================================================================== 00:07:54.644 Total : 15131.75 59.11 0.00 0.00 0.00 0.00 0.00 00:07:54.644 00:07:55.623 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.623 Nvme0n1 : 9.00 15143.78 59.16 0.00 0.00 0.00 0.00 0.00 00:07:55.623 =================================================================================================================== 00:07:55.623 Total : 15143.78 59.16 0.00 0.00 0.00 0.00 0.00 00:07:55.623 00:07:56.557 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.557 Nvme0n1 : 10.00 15153.40 59.19 0.00 0.00 0.00 0.00 0.00 00:07:56.557 =================================================================================================================== 00:07:56.557 Total : 15153.40 59.19 0.00 0.00 0.00 0.00 0.00 00:07:56.557 00:07:56.557 00:07:56.557 Latency(us) 00:07:56.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.557 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.557 Nvme0n1 : 10.01 15154.62 59.20 0.00 0.00 8441.40 5072.97 16796.63 00:07:56.557 =================================================================================================================== 00:07:56.557 Total : 15154.62 59.20 0.00 0.00 8441.40 5072.97 16796.63 00:07:56.557 0 00:07:56.557 23:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1281967 00:07:56.557 23:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1281967 ']' 00:07:56.557 23:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1281967 00:07:56.557 23:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:07:56.557 23:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:56.557 23:13:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1281967 00:07:56.557 23:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:56.557 23:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:56.557 23:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1281967' 00:07:56.557 killing process with pid 1281967 00:07:56.557 23:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1281967 00:07:56.557 Received shutdown signal, test time was about 10.000000 seconds 00:07:56.557 00:07:56.557 Latency(us) 00:07:56.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.557 =================================================================================================================== 00:07:56.557 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:56.557 23:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1281967 00:07:56.815 23:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:57.072 23:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:57.330 23:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4735928-d0b0-4d39-95af-0b3e0bb98409 00:07:57.330 23:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:57.587 23:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:57.587 23:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:57.587 23:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:57.845 [2024-07-25 23:13:55.315855] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:57.845 23:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4735928-d0b0-4d39-95af-0b3e0bb98409 00:07:57.845 23:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:57.845 23:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4735928-d0b0-4d39-95af-0b3e0bb98409 00:07:57.845 23:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:57.845 23:13:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.845 23:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:57.845 23:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.845 23:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:57.845 23:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.845 23:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:57.845 23:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:57.845 23:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4735928-d0b0-4d39-95af-0b3e0bb98409 00:07:58.103 request: 00:07:58.103 { 00:07:58.103 "uuid": "a4735928-d0b0-4d39-95af-0b3e0bb98409", 00:07:58.103 "method": "bdev_lvol_get_lvstores", 00:07:58.103 "req_id": 1 00:07:58.103 } 00:07:58.103 Got JSON-RPC error response 00:07:58.103 response: 00:07:58.103 { 00:07:58.103 "code": -19, 00:07:58.103 "message": "No such device" 00:07:58.103 } 00:07:58.103 23:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:58.103 23:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:58.103 23:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:58.103 23:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:58.103 23:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:58.361 aio_bdev 00:07:58.361 23:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4ea40c8a-f60e-4cb5-9cf1-bddfc6f29785 00:07:58.361 23:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=4ea40c8a-f60e-4cb5-9cf1-bddfc6f29785 00:07:58.361 23:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:58.361 23:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:07:58.361 23:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:58.361 23:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:58.361 23:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:58.619 23:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4ea40c8a-f60e-4cb5-9cf1-bddfc6f29785 -t 2000 00:07:58.619 [ 00:07:58.619 { 00:07:58.619 "name": "4ea40c8a-f60e-4cb5-9cf1-bddfc6f29785", 00:07:58.619 "aliases": [ 00:07:58.619 "lvs/lvol" 00:07:58.619 ], 00:07:58.619 "product_name": "Logical Volume", 00:07:58.619 "block_size": 4096, 00:07:58.619 "num_blocks": 38912, 00:07:58.619 "uuid": "4ea40c8a-f60e-4cb5-9cf1-bddfc6f29785", 00:07:58.619 "assigned_rate_limits": { 00:07:58.619 "rw_ios_per_sec": 0, 00:07:58.619 "rw_mbytes_per_sec": 0, 00:07:58.619 "r_mbytes_per_sec": 0, 00:07:58.619 "w_mbytes_per_sec": 0 00:07:58.619 }, 00:07:58.619 "claimed": false, 00:07:58.619 "zoned": false, 00:07:58.619 "supported_io_types": { 00:07:58.619 "read": true, 00:07:58.619 "write": true, 00:07:58.619 "unmap": true, 00:07:58.619 "flush": false, 00:07:58.619 "reset": true, 00:07:58.619 "nvme_admin": false, 00:07:58.619 "nvme_io": false, 00:07:58.619 "nvme_io_md": false, 00:07:58.619 "write_zeroes": true, 00:07:58.619 "zcopy": false, 00:07:58.619 "get_zone_info": false, 00:07:58.619 "zone_management": false, 00:07:58.619 "zone_append": false, 00:07:58.619 "compare": false, 00:07:58.619 "compare_and_write": false, 00:07:58.619 "abort": false, 00:07:58.619 "seek_hole": true, 00:07:58.619 "seek_data": true, 00:07:58.619 "copy": false, 00:07:58.619 "nvme_iov_md": false 00:07:58.619 }, 00:07:58.619 "driver_specific": { 00:07:58.619 "lvol": { 00:07:58.619 "lvol_store_uuid": "a4735928-d0b0-4d39-95af-0b3e0bb98409", 00:07:58.619 "base_bdev": "aio_bdev", 00:07:58.619 "thin_provision": false, 00:07:58.619 "num_allocated_clusters": 38, 00:07:58.619 "snapshot": false, 00:07:58.619 "clone": false, 00:07:58.619 "esnap_clone": false 00:07:58.619 } 00:07:58.619 } 00:07:58.619 } 00:07:58.619 ] 00:07:58.877 23:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:07:58.877 23:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4735928-d0b0-4d39-95af-0b3e0bb98409 00:07:58.877 23:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:58.877 23:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:58.877 23:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4735928-d0b0-4d39-95af-0b3e0bb98409 00:07:58.877 23:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:59.135 23:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:59.135 23:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4ea40c8a-f60e-4cb5-9cf1-bddfc6f29785 00:07:59.393 23:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a4735928-d0b0-4d39-95af-0b3e0bb98409 00:07:59.959 23:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:59.959 23:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:00.217 00:08:00.217 real 0m17.150s 00:08:00.217 user 0m16.440s 00:08:00.217 sys 0m1.903s 00:08:00.217 23:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.217 23:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:00.217 ************************************ 00:08:00.217 END TEST lvs_grow_clean 00:08:00.217 ************************************ 00:08:00.217 23:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:00.217 23:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:00.217 23:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.217 23:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:00.217 ************************************ 00:08:00.217 START TEST lvs_grow_dirty 00:08:00.217 ************************************ 00:08:00.218 23:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:00.218 23:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:00.218 23:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:00.218 23:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:00.218 23:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:00.218 23:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:00.218 23:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:00.218 23:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:00.218 23:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:00.218 23:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:00.476 23:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:00.476 23:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:00.734 23:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f68c91c4-9649-492c-a702-d31e9dd323ad 00:08:00.734 23:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f68c91c4-9649-492c-a702-d31e9dd323ad 00:08:00.734 23:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:00.993 23:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:00.993 23:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:00.993 23:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f68c91c4-9649-492c-a702-d31e9dd323ad lvol 150 00:08:01.252 23:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=bdb81ea1-06eb-4df7-ab9a-17c5599c6770 00:08:01.252 23:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:01.252 23:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:01.511 [2024-07-25 23:13:59.045401] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:01.511 [2024-07-25 23:13:59.045507] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:01.511 true 00:08:01.511 23:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f68c91c4-9649-492c-a702-d31e9dd323ad 00:08:01.511 23:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:01.768 23:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:01.768 23:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:02.025 23:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdb81ea1-06eb-4df7-ab9a-17c5599c6770 00:08:02.283 23:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:02.541 [2024-07-25 23:14:00.036456] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:02.541 23:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:02.800 23:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1284028 00:08:02.800 23:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:02.800 23:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:02.800 23:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1284028 /var/tmp/bdevperf.sock 00:08:02.800 23:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1284028 ']' 00:08:02.800 23:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:02.800 23:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.800 23:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:02.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:02.800 23:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.800 23:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:02.800 [2024-07-25 23:14:00.333711] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:08:02.800 [2024-07-25 23:14:00.333797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1284028 ] 00:08:02.800 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.800 [2024-07-25 23:14:00.365689] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
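Before any RPC is sent to bdevperf, both variants block in waitforlisten until the UNIX domain socket answers. The real helper lives in common/autotest_common.sh; a rough approximation of the idea, using rpc_get_methods as the liveness probe (an assumption of this sketch, not what the helper literally does):

  while ! $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods \
          >/dev/null 2>&1; do
      kill -0 "$bdevperf_pid" || exit 1   # give up if bdevperf died
      sleep 0.1
  done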
00:08:02.800 [2024-07-25 23:14:00.395593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.800 [2024-07-25 23:14:00.487740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.059 23:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:03.059 23:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:03.059 23:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:03.317 Nvme0n1 00:08:03.317 23:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:03.575 [ 00:08:03.575 { 00:08:03.575 "name": "Nvme0n1", 00:08:03.575 "aliases": [ 00:08:03.575 "bdb81ea1-06eb-4df7-ab9a-17c5599c6770" 00:08:03.575 ], 00:08:03.575 "product_name": "NVMe disk", 00:08:03.575 "block_size": 4096, 00:08:03.575 "num_blocks": 38912, 00:08:03.575 "uuid": "bdb81ea1-06eb-4df7-ab9a-17c5599c6770", 00:08:03.575 "assigned_rate_limits": { 00:08:03.575 "rw_ios_per_sec": 0, 00:08:03.575 "rw_mbytes_per_sec": 0, 00:08:03.575 "r_mbytes_per_sec": 0, 00:08:03.575 "w_mbytes_per_sec": 0 00:08:03.575 }, 00:08:03.575 "claimed": false, 00:08:03.575 "zoned": false, 00:08:03.575 "supported_io_types": { 00:08:03.575 "read": true, 00:08:03.575 "write": true, 00:08:03.575 "unmap": true, 00:08:03.575 "flush": true, 00:08:03.575 "reset": true, 00:08:03.575 "nvme_admin": true, 00:08:03.575 "nvme_io": true, 00:08:03.575 "nvme_io_md": false, 00:08:03.575 "write_zeroes": true, 00:08:03.575 "zcopy": false, 00:08:03.575 "get_zone_info": false, 00:08:03.575 "zone_management": false, 00:08:03.575 "zone_append": false, 00:08:03.575 "compare": true, 00:08:03.575 "compare_and_write": true, 00:08:03.575 "abort": true, 00:08:03.575 "seek_hole": false, 00:08:03.575 "seek_data": false, 00:08:03.575 "copy": true, 00:08:03.575 "nvme_iov_md": false 00:08:03.575 }, 00:08:03.575 "memory_domains": [ 00:08:03.575 { 00:08:03.575 "dma_device_id": "system", 00:08:03.575 "dma_device_type": 1 00:08:03.575 } 00:08:03.575 ], 00:08:03.575 "driver_specific": { 00:08:03.575 "nvme": [ 00:08:03.575 { 00:08:03.575 "trid": { 00:08:03.575 "trtype": "TCP", 00:08:03.575 "adrfam": "IPv4", 00:08:03.575 "traddr": "10.0.0.2", 00:08:03.575 "trsvcid": "4420", 00:08:03.575 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:03.575 }, 00:08:03.575 "ctrlr_data": { 00:08:03.575 "cntlid": 1, 00:08:03.575 "vendor_id": "0x8086", 00:08:03.575 "model_number": "SPDK bdev Controller", 00:08:03.575 "serial_number": "SPDK0", 00:08:03.575 "firmware_revision": "24.09", 00:08:03.575 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:03.576 "oacs": { 00:08:03.576 "security": 0, 00:08:03.576 "format": 0, 00:08:03.576 "firmware": 0, 00:08:03.576 "ns_manage": 0 00:08:03.576 }, 00:08:03.576 "multi_ctrlr": true, 00:08:03.576 "ana_reporting": false 00:08:03.576 }, 00:08:03.576 "vs": { 00:08:03.576 "nvme_version": "1.3" 00:08:03.576 }, 00:08:03.576 "ns_data": { 00:08:03.576 "id": 1, 00:08:03.576 "can_share": true 00:08:03.576 } 00:08:03.576 } 00:08:03.576 ], 00:08:03.576 "mp_policy": "active_passive" 00:08:03.576 } 00:08:03.576 } 00:08:03.576 ] 00:08:03.576 
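With Nvme0n1 visible through the bdevperf socket, the test kicks off the 10-second random-write job and, two seconds in, grows the lvstore while writes are in flight, then re-reads the cluster count. Condensed from the trace (backgrounding perform_tests and waiting on it mirrors the script's run_test_pid handling):

  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  run_test_pid=$!
  sleep 2
  $SPDK/scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"   # claim the extra 200 MiB
  # 400 MiB / 4 MiB = 100 clusters, minus metadata -> expect 99
  (( $($SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" \
       | jq -r '.[0].total_data_clusters') == 99 ))
  wait "$run_test_pid"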
23:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1284163 00:08:03.576 23:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:03.576 23:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:03.834 Running I/O for 10 seconds... 00:08:04.767 Latency(us) 00:08:04.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.767 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.767 Nvme0n1 : 1.00 15220.00 59.45 0.00 0.00 0.00 0.00 0.00 00:08:04.767 =================================================================================================================== 00:08:04.767 Total : 15220.00 59.45 0.00 0.00 0.00 0.00 0.00 00:08:04.767 00:08:05.701 23:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f68c91c4-9649-492c-a702-d31e9dd323ad 00:08:05.701 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.701 Nvme0n1 : 2.00 14818.00 57.88 0.00 0.00 0.00 0.00 0.00 00:08:05.701 =================================================================================================================== 00:08:05.701 Total : 14818.00 57.88 0.00 0.00 0.00 0.00 0.00 00:08:05.701 00:08:05.958 true 00:08:05.958 23:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f68c91c4-9649-492c-a702-d31e9dd323ad 00:08:05.958 23:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:06.215 23:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:06.215 23:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:06.215 23:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1284163 00:08:06.778 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.778 Nvme0n1 : 3.00 14654.67 57.24 0.00 0.00 0.00 0.00 0.00 00:08:06.778 =================================================================================================================== 00:08:06.778 Total : 14654.67 57.24 0.00 0.00 0.00 0.00 0.00 00:08:06.778 00:08:07.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.711 Nvme0n1 : 4.00 14623.00 57.12 0.00 0.00 0.00 0.00 0.00 00:08:07.711 =================================================================================================================== 00:08:07.711 Total : 14623.00 57.12 0.00 0.00 0.00 0.00 0.00 00:08:07.711 00:08:08.643 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.643 Nvme0n1 : 5.00 14615.20 57.09 0.00 0.00 0.00 0.00 0.00 00:08:08.643 =================================================================================================================== 00:08:08.643 Total : 14615.20 57.09 0.00 0.00 0.00 0.00 0.00 00:08:08.643 00:08:10.017 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:08:10.017 Nvme0n1 : 6.00 14634.00 57.16 0.00 0.00 0.00 0.00 0.00 00:08:10.017 =================================================================================================================== 00:08:10.017 Total : 14634.00 57.16 0.00 0.00 0.00 0.00 0.00 00:08:10.017 00:08:10.951 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.951 Nvme0n1 : 7.00 14618.86 57.10 0.00 0.00 0.00 0.00 0.00 00:08:10.951 =================================================================================================================== 00:08:10.951 Total : 14618.86 57.10 0.00 0.00 0.00 0.00 0.00 00:08:10.951 00:08:11.923 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.923 Nvme0n1 : 8.00 14621.50 57.12 0.00 0.00 0.00 0.00 0.00 00:08:11.923 =================================================================================================================== 00:08:11.923 Total : 14621.50 57.12 0.00 0.00 0.00 0.00 0.00 00:08:11.923 00:08:12.864 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.864 Nvme0n1 : 9.00 14612.00 57.08 0.00 0.00 0.00 0.00 0.00 00:08:12.864 =================================================================================================================== 00:08:12.864 Total : 14612.00 57.08 0.00 0.00 0.00 0.00 0.00 00:08:12.864 00:08:13.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.799 Nvme0n1 : 10.00 14618.00 57.10 0.00 0.00 0.00 0.00 0.00 00:08:13.799 =================================================================================================================== 00:08:13.799 Total : 14618.00 57.10 0.00 0.00 0.00 0.00 0.00 00:08:13.799 00:08:13.799 00:08:13.799 Latency(us) 00:08:13.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.799 Nvme0n1 : 10.01 14617.99 57.10 0.00 0.00 8748.33 6505.05 16893.72 00:08:13.799 =================================================================================================================== 00:08:13.799 Total : 14617.99 57.10 0.00 0.00 8748.33 6505.05 16893.72 00:08:13.799 0 00:08:13.799 23:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1284028 00:08:13.799 23:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1284028 ']' 00:08:13.799 23:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1284028 00:08:13.799 23:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:13.799 23:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:13.799 23:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1284028 00:08:13.799 23:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:13.799 23:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:13.799 23:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1284028' 00:08:13.799 killing process with pid 1284028 00:08:13.799 23:14:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1284028 00:08:13.799 Received shutdown signal, test time was about 10.000000 seconds 00:08:13.799 00:08:13.799 Latency(us) 00:08:13.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.799 =================================================================================================================== 00:08:13.799 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:13.799 23:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1284028 00:08:14.058 23:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:14.316 23:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:14.573 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f68c91c4-9649-492c-a702-d31e9dd323ad 00:08:14.573 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:14.832 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:14.832 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:14.832 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1281525 00:08:14.832 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1281525 00:08:14.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1281525 Killed "${NVMF_APP[@]}" "$@" 00:08:14.832 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:14.832 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:14.832 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:14.832 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:14.832 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:14.832 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1285498 00:08:14.832 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:14.832 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1285498 00:08:14.832 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1285498 ']' 00:08:14.832 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.832 23:14:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:14.832 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.832 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:14.832 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:14.832 [2024-07-25 23:14:12.453524] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:08:14.832 [2024-07-25 23:14:12.453623] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.832 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.832 [2024-07-25 23:14:12.492905] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:14.832 [2024-07-25 23:14:12.518761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.090 [2024-07-25 23:14:12.608153] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.090 [2024-07-25 23:14:12.608209] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.090 [2024-07-25 23:14:12.608239] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.090 [2024-07-25 23:14:12.608252] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.090 [2024-07-25 23:14:12.608262] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
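Here the dirty variant earns its name: instead of deleting the lvstore gracefully, it kill -9s the running nvmf_tgt so blobstore metadata is never flushed, then starts a fresh target. A sketch of the step, with the nvmf_tgt flags taken from the trace (the ip netns exec cvl_0_0_ns_spdk prefix and the waitforlisten call are elided):

  # 99 data clusters minus the 38 held by the 150 MiB lvol -> 61 free
  (( free_clusters == 61 ))
  kill -9 "$nvmfpid"            # unclean shutdown: the lvstore is left dirty on disk
  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # re-creating the AIO bdev below triggers blobstore replay, which the log reports
  # as "bs_recover: Performing recovery on blobstore" / "Recover: blob 0x0" / "0x1"
  $SPDK/scripts/rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096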
00:08:15.090 [2024-07-25 23:14:12.608292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.090 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:15.090 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:15.090 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:15.090 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:15.090 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:15.090 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.090 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:15.348 [2024-07-25 23:14:12.980459] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:15.348 [2024-07-25 23:14:12.980603] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:15.348 [2024-07-25 23:14:12.980662] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:15.348 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:15.348 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev bdb81ea1-06eb-4df7-ab9a-17c5599c6770 00:08:15.349 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=bdb81ea1-06eb-4df7-ab9a-17c5599c6770 00:08:15.349 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:15.349 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:15.349 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:15.349 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:15.349 23:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:15.607 23:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bdb81ea1-06eb-4df7-ab9a-17c5599c6770 -t 2000 00:08:15.864 [ 00:08:15.864 { 00:08:15.864 "name": "bdb81ea1-06eb-4df7-ab9a-17c5599c6770", 00:08:15.864 "aliases": [ 00:08:15.864 "lvs/lvol" 00:08:15.864 ], 00:08:15.864 "product_name": "Logical Volume", 00:08:15.864 "block_size": 4096, 00:08:15.864 "num_blocks": 38912, 00:08:15.864 "uuid": "bdb81ea1-06eb-4df7-ab9a-17c5599c6770", 00:08:15.864 "assigned_rate_limits": { 00:08:15.864 "rw_ios_per_sec": 0, 00:08:15.864 "rw_mbytes_per_sec": 0, 00:08:15.864 "r_mbytes_per_sec": 0, 00:08:15.864 "w_mbytes_per_sec": 0 00:08:15.864 }, 00:08:15.864 "claimed": false, 00:08:15.864 "zoned": false, 
00:08:15.864 "supported_io_types": { 00:08:15.864 "read": true, 00:08:15.864 "write": true, 00:08:15.864 "unmap": true, 00:08:15.864 "flush": false, 00:08:15.864 "reset": true, 00:08:15.864 "nvme_admin": false, 00:08:15.864 "nvme_io": false, 00:08:15.864 "nvme_io_md": false, 00:08:15.864 "write_zeroes": true, 00:08:15.864 "zcopy": false, 00:08:15.864 "get_zone_info": false, 00:08:15.864 "zone_management": false, 00:08:15.864 "zone_append": false, 00:08:15.864 "compare": false, 00:08:15.864 "compare_and_write": false, 00:08:15.864 "abort": false, 00:08:15.864 "seek_hole": true, 00:08:15.864 "seek_data": true, 00:08:15.864 "copy": false, 00:08:15.864 "nvme_iov_md": false 00:08:15.864 }, 00:08:15.864 "driver_specific": { 00:08:15.864 "lvol": { 00:08:15.864 "lvol_store_uuid": "f68c91c4-9649-492c-a702-d31e9dd323ad", 00:08:15.864 "base_bdev": "aio_bdev", 00:08:15.864 "thin_provision": false, 00:08:15.864 "num_allocated_clusters": 38, 00:08:15.864 "snapshot": false, 00:08:15.864 "clone": false, 00:08:15.864 "esnap_clone": false 00:08:15.864 } 00:08:15.864 } 00:08:15.864 } 00:08:15.864 ] 00:08:15.864 23:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:15.864 23:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f68c91c4-9649-492c-a702-d31e9dd323ad 00:08:15.864 23:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:16.122 23:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:16.122 23:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f68c91c4-9649-492c-a702-d31e9dd323ad 00:08:16.122 23:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:16.379 23:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:16.379 23:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:16.636 [2024-07-25 23:14:14.309513] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:16.636 23:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f68c91c4-9649-492c-a702-d31e9dd323ad 00:08:16.637 23:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:16.637 23:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f68c91c4-9649-492c-a702-d31e9dd323ad 00:08:16.637 23:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:16.637 23:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:08:16.637 23:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:16.637 23:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.637 23:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:16.637 23:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.637 23:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:16.637 23:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:16.637 23:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f68c91c4-9649-492c-a702-d31e9dd323ad 00:08:16.894 request: 00:08:16.895 { 00:08:16.895 "uuid": "f68c91c4-9649-492c-a702-d31e9dd323ad", 00:08:16.895 "method": "bdev_lvol_get_lvstores", 00:08:16.895 "req_id": 1 00:08:16.895 } 00:08:16.895 Got JSON-RPC error response 00:08:16.895 response: 00:08:16.895 { 00:08:16.895 "code": -19, 00:08:16.895 "message": "No such device" 00:08:16.895 } 00:08:16.895 23:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:16.895 23:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:16.895 23:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:16.895 23:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:16.895 23:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:17.153 aio_bdev 00:08:17.153 23:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bdb81ea1-06eb-4df7-ab9a-17c5599c6770 00:08:17.153 23:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=bdb81ea1-06eb-4df7-ab9a-17c5599c6770 00:08:17.153 23:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:17.153 23:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:17.153 23:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:17.153 23:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:17.153 23:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:17.411 23:14:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bdb81ea1-06eb-4df7-ab9a-17c5599c6770 -t 2000 00:08:17.669 [ 00:08:17.669 { 00:08:17.669 "name": "bdb81ea1-06eb-4df7-ab9a-17c5599c6770", 00:08:17.669 "aliases": [ 00:08:17.669 "lvs/lvol" 00:08:17.669 ], 00:08:17.669 "product_name": "Logical Volume", 00:08:17.669 "block_size": 4096, 00:08:17.669 "num_blocks": 38912, 00:08:17.669 "uuid": "bdb81ea1-06eb-4df7-ab9a-17c5599c6770", 00:08:17.669 "assigned_rate_limits": { 00:08:17.669 "rw_ios_per_sec": 0, 00:08:17.669 "rw_mbytes_per_sec": 0, 00:08:17.669 "r_mbytes_per_sec": 0, 00:08:17.669 "w_mbytes_per_sec": 0 00:08:17.669 }, 00:08:17.669 "claimed": false, 00:08:17.669 "zoned": false, 00:08:17.669 "supported_io_types": { 00:08:17.669 "read": true, 00:08:17.669 "write": true, 00:08:17.669 "unmap": true, 00:08:17.669 "flush": false, 00:08:17.669 "reset": true, 00:08:17.669 "nvme_admin": false, 00:08:17.669 "nvme_io": false, 00:08:17.669 "nvme_io_md": false, 00:08:17.669 "write_zeroes": true, 00:08:17.669 "zcopy": false, 00:08:17.669 "get_zone_info": false, 00:08:17.669 "zone_management": false, 00:08:17.669 "zone_append": false, 00:08:17.669 "compare": false, 00:08:17.669 "compare_and_write": false, 00:08:17.669 "abort": false, 00:08:17.669 "seek_hole": true, 00:08:17.669 "seek_data": true, 00:08:17.669 "copy": false, 00:08:17.669 "nvme_iov_md": false 00:08:17.669 }, 00:08:17.669 "driver_specific": { 00:08:17.669 "lvol": { 00:08:17.669 "lvol_store_uuid": "f68c91c4-9649-492c-a702-d31e9dd323ad", 00:08:17.669 "base_bdev": "aio_bdev", 00:08:17.669 "thin_provision": false, 00:08:17.669 "num_allocated_clusters": 38, 00:08:17.669 "snapshot": false, 00:08:17.669 "clone": false, 00:08:17.669 "esnap_clone": false 00:08:17.669 } 00:08:17.669 } 00:08:17.669 } 00:08:17.669 ] 00:08:17.669 23:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:17.669 23:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f68c91c4-9649-492c-a702-d31e9dd323ad 00:08:17.669 23:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:17.927 23:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:17.927 23:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f68c91c4-9649-492c-a702-d31e9dd323ad 00:08:17.927 23:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:18.184 23:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:18.184 23:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bdb81ea1-06eb-4df7-ab9a-17c5599c6770 00:08:18.442 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f68c91c4-9649-492c-a702-d31e9dd323ad 
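
Annotation (not part of the captured output): the lvs_grow_dirty flow above first proves the error path, bdev_lvol_get_lvstores must fail with the -19 (No such device) JSON-RPC response once aio_bdev is hot-removed, then recreates the AIO bdev, waits for the lvol bdev to be re-examined, and asserts the same lvstore geometry: 61 free clusters out of 99 total data clusters. A minimal sketch of that geometry check, using only the rpc.py path, RPC, and jq filters traced above (the helper name itself is hypothetical, not part of the suite):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Hypothetical condensation of the free/total cluster assertions in the trace.
  check_lvs_clusters() {
    local uuid=$1 want_free=$2 want_total=$3
    local free total
    free=$("$rpc" bdev_lvol_get_lvstores -u "$uuid" | jq -r '.[0].free_clusters')
    total=$("$rpc" bdev_lvol_get_lvstores -u "$uuid" | jq -r '.[0].total_data_clusters')
    (( free == want_free && total == want_total ))
  }
  # Mirrors the checks in the trace:
  check_lvs_clusters f68c91c4-9649-492c-a702-d31e9dd323ad 61 99
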
00:08:18.700 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:18.958 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:18.958 00:08:18.958 real 0m18.924s 00:08:18.958 user 0m46.601s 00:08:18.958 sys 0m5.141s 00:08:18.958 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.958 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:18.958 ************************************ 00:08:18.958 END TEST lvs_grow_dirty 00:08:18.958 ************************************ 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:19.216 nvmf_trace.0 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:19.216 rmmod nvme_tcp 00:08:19.216 rmmod nvme_fabrics 00:08:19.216 rmmod nvme_keyring 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1285498 ']' 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1285498 00:08:19.216 
23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1285498 ']' 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1285498 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1285498 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1285498' 00:08:19.216 killing process with pid 1285498 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1285498 00:08:19.216 23:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1285498 00:08:19.475 23:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:19.475 23:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:19.475 23:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:19.475 23:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:19.475 23:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:19.475 23:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.475 23:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.475 23:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.379 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:21.379 00:08:21.379 real 0m41.433s 00:08:21.379 user 1m8.708s 00:08:21.379 sys 0m8.976s 00:08:21.379 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.379 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:21.379 ************************************ 00:08:21.379 END TEST nvmf_lvs_grow 00:08:21.379 ************************************ 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:21.638 ************************************ 00:08:21.638 START TEST nvmf_bdev_io_wait 00:08:21.638 ************************************ 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:21.638 * Looking for test storage... 00:08:21.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:21.638 
23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:08:21.638 23:14:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:08:23.542 23:14:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:23.542 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:23.542 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:23.542 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:23.542 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:23.542 23:14:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:23.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:08:23.542 00:08:23.542 --- 10.0.0.2 ping statistics --- 00:08:23.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.542 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:23.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:08:23.542 00:08:23.542 --- 10.0.0.1 ping statistics --- 00:08:23.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.542 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:23.542 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:23.543 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.543 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:23.543 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:23.543 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:23.543 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:23.543 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:23.543 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:23.800 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1288020 00:08:23.800 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:23.800 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1288020 00:08:23.800 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1288020 ']' 00:08:23.800 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.800 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:23.800 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.800 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:23.800 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:23.800 [2024-07-25 23:14:21.315457] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
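
Annotation (not part of the captured output): condensed from the nvmf_tcp_init trace above (interface names, addresses, the iptables rule, and both pings appear verbatim in the log), the plumbing that produced the two ping results is:

  ip netns add cvl_0_0_ns_spdk                                      # private target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                         # target NIC into the ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                # root ns -> target (0.279 ms above)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator (0.155 ms above)

nvmf_tgt is then launched inside that namespace with -i 0 -e 0xFFFF -m 0xF --wait-for-rpc, which is why the EAL parameters line that follows carries --file-prefix=spdk0.
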
00:08:23.800 [2024-07-25 23:14:21.315525] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.800 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.800 [2024-07-25 23:14:21.351929] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:23.800 [2024-07-25 23:14:21.382774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.800 [2024-07-25 23:14:21.474720] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.800 [2024-07-25 23:14:21.474781] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.800 [2024-07-25 23:14:21.474806] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.800 [2024-07-25 23:14:21.474822] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.800 [2024-07-25 23:14:21.474834] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.800 [2024-07-25 23:14:21.474911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.800 [2024-07-25 23:14:21.474968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.800 [2024-07-25 23:14:21.475092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.800 [2024-07-25 23:14:21.475098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.800 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:23.800 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:23.800 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:23.800 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:23.800 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.058 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.058 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:24.058 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.058 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.058 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.058 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:24.058 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.058 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.058 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.058 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd 
nvmf_create_transport -t tcp -o -u 8192 00:08:24.058 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.058 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.058 [2024-07-25 23:14:21.619680] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.058 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.059 Malloc0 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.059 [2024-07-25 23:14:21.689798] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1288047 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1288049 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:24.059 23:14:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:24.059 { 00:08:24.059 "params": { 00:08:24.059 "name": "Nvme$subsystem", 00:08:24.059 "trtype": "$TEST_TRANSPORT", 00:08:24.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.059 "adrfam": "ipv4", 00:08:24.059 "trsvcid": "$NVMF_PORT", 00:08:24.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.059 "hdgst": ${hdgst:-false}, 00:08:24.059 "ddgst": ${ddgst:-false} 00:08:24.059 }, 00:08:24.059 "method": "bdev_nvme_attach_controller" 00:08:24.059 } 00:08:24.059 EOF 00:08:24.059 )") 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1288051 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:24.059 { 00:08:24.059 "params": { 00:08:24.059 "name": "Nvme$subsystem", 00:08:24.059 "trtype": "$TEST_TRANSPORT", 00:08:24.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.059 "adrfam": "ipv4", 00:08:24.059 "trsvcid": "$NVMF_PORT", 00:08:24.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.059 "hdgst": ${hdgst:-false}, 00:08:24.059 "ddgst": ${ddgst:-false} 00:08:24.059 }, 00:08:24.059 "method": "bdev_nvme_attach_controller" 00:08:24.059 } 00:08:24.059 EOF 00:08:24.059 )") 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1288054 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:24.059 { 00:08:24.059 "params": { 00:08:24.059 "name": "Nvme$subsystem", 00:08:24.059 "trtype": "$TEST_TRANSPORT", 00:08:24.059 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:08:24.059 "adrfam": "ipv4", 00:08:24.059 "trsvcid": "$NVMF_PORT", 00:08:24.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.059 "hdgst": ${hdgst:-false}, 00:08:24.059 "ddgst": ${ddgst:-false} 00:08:24.059 }, 00:08:24.059 "method": "bdev_nvme_attach_controller" 00:08:24.059 } 00:08:24.059 EOF 00:08:24.059 )") 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:24.059 { 00:08:24.059 "params": { 00:08:24.059 "name": "Nvme$subsystem", 00:08:24.059 "trtype": "$TEST_TRANSPORT", 00:08:24.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.059 "adrfam": "ipv4", 00:08:24.059 "trsvcid": "$NVMF_PORT", 00:08:24.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.059 "hdgst": ${hdgst:-false}, 00:08:24.059 "ddgst": ${ddgst:-false} 00:08:24.059 }, 00:08:24.059 "method": "bdev_nvme_attach_controller" 00:08:24.059 } 00:08:24.059 EOF 00:08:24.059 )") 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1288047 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:08:24.059 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:24.059 "params": { 00:08:24.059 "name": "Nvme1", 00:08:24.059 "trtype": "tcp", 00:08:24.059 "traddr": "10.0.0.2", 00:08:24.059 "adrfam": "ipv4", 00:08:24.059 "trsvcid": "4420", 00:08:24.059 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:24.059 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:24.059 "hdgst": false, 00:08:24.059 "ddgst": false 00:08:24.059 }, 00:08:24.059 "method": "bdev_nvme_attach_controller" 00:08:24.059 }' 00:08:24.060 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:24.060 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:24.060 "params": { 00:08:24.060 "name": "Nvme1", 00:08:24.060 "trtype": "tcp", 00:08:24.060 "traddr": "10.0.0.2", 00:08:24.060 "adrfam": "ipv4", 00:08:24.060 "trsvcid": "4420", 00:08:24.060 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:24.060 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:24.060 "hdgst": false, 00:08:24.060 "ddgst": false 00:08:24.060 }, 00:08:24.060 "method": "bdev_nvme_attach_controller" 00:08:24.060 }' 00:08:24.060 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:24.060 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:24.060 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:24.060 "params": { 00:08:24.060 "name": "Nvme1", 00:08:24.060 "trtype": "tcp", 00:08:24.060 "traddr": "10.0.0.2", 00:08:24.060 "adrfam": "ipv4", 00:08:24.060 "trsvcid": "4420", 00:08:24.060 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:24.060 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:24.060 "hdgst": false, 00:08:24.060 "ddgst": false 00:08:24.060 }, 00:08:24.060 "method": "bdev_nvme_attach_controller" 00:08:24.060 }' 00:08:24.060 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:24.060 23:14:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:24.060 "params": { 00:08:24.060 "name": "Nvme1", 00:08:24.060 "trtype": "tcp", 00:08:24.060 "traddr": "10.0.0.2", 00:08:24.060 "adrfam": "ipv4", 00:08:24.060 "trsvcid": "4420", 00:08:24.060 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:24.060 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:24.060 "hdgst": false, 00:08:24.060 "ddgst": false 00:08:24.060 }, 00:08:24.060 "method": "bdev_nvme_attach_controller" 00:08:24.060 }' 00:08:24.060 [2024-07-25 23:14:21.736131] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:08:24.060 [2024-07-25 23:14:21.736131] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:08:24.060 [2024-07-25 23:14:21.736132] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
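
Annotation (not part of the captured output): each rendered block above amounts to one bdev_nvme_attach_controller call against the target listening on 10.0.0.2:4420. For illustration only, in this run the JSON is consumed by bdevperf via --json rather than issued to a live RPC server, the same parameters expressed as an rpc.py invocation would read roughly as follows (flag spelling per SPDK's rpc.py; treat it as an assumption if your tree differs):

  # Values taken from the printed JSON; hdgst/ddgst default to false when omitted.
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 \
      -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
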
00:08:24.060 [2024-07-25 23:14:21.736219] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:24.060 [2024-07-25 23:14:21.736220] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:24.060 [2024-07-25 23:14:21.736220] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:24.060 [2024-07-25 23:14:21.737650] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:08:24.060 [2024-07-25 23:14:21.737720] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:24.317 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.317 [2024-07-25 23:14:21.881670] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:24.317 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.317 [2024-07-25 23:14:21.909518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.317 [2024-07-25 23:14:21.981834] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:24.317 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.317 [2024-07-25 23:14:21.985273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:24.317 [2024-07-25 23:14:22.011819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.575 [2024-07-25 23:14:22.080039] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:24.575 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.575 [2024-07-25 23:14:22.086990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:24.575 [2024-07-25 23:14:22.110177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.575 [2024-07-25 23:14:22.155523] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:24.575 [2024-07-25 23:14:22.185609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.575 [2024-07-25 23:14:22.187455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:24.575 [2024-07-25 23:14:22.253041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:24.832 Running I/O for 1 seconds... 00:08:24.832 Running I/O for 1 seconds... 00:08:24.832 Running I/O for 1 seconds... 00:08:24.832 Running I/O for 1 seconds...
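
Annotation (not part of the captured output): all four bdevperf jobs run their one-second windows concurrently; the script then joins them by PID, as the wait calls in the trace show (the @37 wait on 1288047 has already been issued above, and @38 through @40 follow below). Schematically, with the PID variables the script itself sets:

  wait "$WRITE_PID"   # 1288047, write, core mask 0x10
  wait "$READ_PID"    # 1288049, read,  core mask 0x20
  wait "$FLUSH_PID"   # 1288051, flush, core mask 0x40
  wait "$UNMAP_PID"   # 1288054, unmap, core mask 0x80
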
00:08:25.766
00:08:25.766 Latency(us)
00:08:25.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:25.766 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:08:25.766 Nvme1n1 : 1.00 78870.81 308.09 0.00 0.00 1616.48 476.35 2269.49
00:08:25.766 ===================================================================================================================
00:08:25.766 Total : 78870.81 308.09 0.00 0.00 1616.48 476.35 2269.49
00:08:25.766
00:08:25.766 Latency(us)
00:08:25.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:25.766 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:08:25.766 Nvme1n1 : 1.02 6778.40 26.48 0.00 0.00 18753.82 8398.32 26796.94
00:08:25.766 ===================================================================================================================
00:08:25.766 Total : 6778.40 26.48 0.00 0.00 18753.82 8398.32 26796.94
00:08:25.766
00:08:25.766 Latency(us)
00:08:25.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:25.766 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:08:25.766 Nvme1n1 : 1.01 9108.81 35.58 0.00 0.00 13985.29 8592.50 26991.12
00:08:25.766 ===================================================================================================================
00:08:25.766 Total : 9108.81 35.58 0.00 0.00 13985.29 8592.50 26991.12
00:08:26.024
00:08:26.024 Latency(us)
00:08:26.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:26.024 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:08:26.024 Nvme1n1 : 1.01 6621.73 25.87 0.00 0.00 19256.83 6796.33 39807.05
00:08:26.024 ===================================================================================================================
00:08:26.024 Total : 6621.73 25.87 0.00 0.00 19256.83 6796.33 39807.05
00:08:26.281 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1288049 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1288051 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1288054 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.281 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.281 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:26.281 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:26.281 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:26.281 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:26.281 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:26.281 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:26.281 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20}
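
Annotation (not part of the captured output): as a sanity check on the flush row in the tables above, IOPS times the 4096-byte IO size reproduces the MiB/s column:

  # 78870.81 IO/s * 4096 B = 323054837.76 B/s; divided by 1048576 that is
  # 308.09 MiB/s, matching the printed MiB/s value.
  awk 'BEGIN { printf "%.2f MiB/s\n", 78870.81 * 4096 / 1048576 }'
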
00:08:26.281 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:26.281 rmmod nvme_tcp 00:08:26.281 rmmod nvme_fabrics 00:08:26.281 rmmod nvme_keyring 00:08:26.281 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:26.281 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:26.281 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:26.281 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1288020 ']' 00:08:26.281 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1288020 00:08:26.281 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1288020 ']' 00:08:26.281 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1288020 00:08:26.281 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:26.281 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:26.281 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1288020 00:08:26.281 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:26.281 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:26.281 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1288020' 00:08:26.281 killing process with pid 1288020 00:08:26.281 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1288020 00:08:26.281 23:14:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1288020 00:08:26.538 23:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:26.538 23:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:26.538 23:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:26.538 23:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:26.538 23:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:26.538 23:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.538 23:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.538 23:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:29.067 00:08:29.067 real 0m7.080s 00:08:29.067 user 0m15.799s 00:08:29.067 sys 0m3.682s 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.067 ************************************ 00:08:29.067 END TEST 
nvmf_bdev_io_wait 00:08:29.067 ************************************ 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:29.067 ************************************ 00:08:29.067 START TEST nvmf_queue_depth 00:08:29.067 ************************************ 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:29.067 * Looking for test storage... 00:08:29.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- 
# [[ -e /bin/wpdk_common.sh ]] 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:29.067 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:29.068 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:29.068 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- 
# '[' 0 -eq 1 ']' 00:08:29.068 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.068 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.068 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:29.068 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:29.068 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:29.068 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:29.068 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:29.068 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:29.068 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:29.068 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:29.068 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.068 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:29.068 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:29.068 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:29.068 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.068 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.068 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.068 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:29.068 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:29.068 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:08:29.068 23:14:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:30.992 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:30.992 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:08:30.992 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:30.992 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:30.992 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:30.992 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:30.992 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:30.992 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:08:30.992 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:30.992 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@296 -- # e810=() 00:08:30.992 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:08:30.992 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:08:30.992 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:08:30.992 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:08:30.992 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:08:30.992 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:30.992 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:30.992 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:30.992 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:30.992 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:30.992 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:30.992 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:30.992 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:30.993 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:30.993 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:30.993 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:30.993 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:30.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:30.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:08:30.993 00:08:30.993 --- 10.0.0.2 ping statistics --- 00:08:30.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.993 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:30.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:30.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:08:30.993 00:08:30.993 --- 10.0.0.1 ping statistics --- 00:08:30.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.993 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:30.993 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:30.994 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:30.994 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:30.994 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:30.994 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:30.994 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1290276 00:08:30.994 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:30.994 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1290276 00:08:30.994 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1290276 ']' 00:08:30.994 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.994 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:30.994 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
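The nvmf_tcp_init trace above is what gives the phy test a real wire: the two ports of the Intel E810 NIC discovered earlier (cvl_0_0 and cvl_0_1, ice driver) are split across network namespaces, so target and initiator traffic cannot short-circuit through loopback. Condensed to the commands this run actually executed:

ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move port 0 into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator port
ping -c 1 10.0.0.2                                  # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> root ns

The two successful pings above are the gate for continuing, and the target itself is then started inside the namespace (the 'ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2' line above), which is why the later address flushes and cleanup steps are likewise wrapped in ip netns exec.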
00:08:30.994 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:30.994 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:30.994 [2024-07-25 23:14:28.442852] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:08:30.994 [2024-07-25 23:14:28.442931] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.994 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.994 [2024-07-25 23:14:28.478025] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:30.994 [2024-07-25 23:14:28.504927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.994 [2024-07-25 23:14:28.593010] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.994 [2024-07-25 23:14:28.593092] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.994 [2024-07-25 23:14:28.593122] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.994 [2024-07-25 23:14:28.593134] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.994 [2024-07-25 23:14:28.593144] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.994 [2024-07-25 23:14:28.593179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.994 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:30.994 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:30.994 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:30.994 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:30.994 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.252 [2024-07-25 23:14:28.742248] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.252 Malloc0 00:08:31.252 23:14:28 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.252 [2024-07-25 23:14:28.801244] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1290305 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1290305 /var/tmp/bdevperf.sock 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1290305 ']' 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:31.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.252 23:14:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.252 [2024-07-25 23:14:28.844276] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:08:31.252 [2024-07-25 23:14:28.844367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1290305 ] 00:08:31.252 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.252 [2024-07-25 23:14:28.876724] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:31.252 [2024-07-25 23:14:28.907269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.511 [2024-07-25 23:14:28.996983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.511 23:14:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:31.511 23:14:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:31.511 23:14:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:31.511 23:14:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.511 23:14:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.768 NVMe0n1 00:08:31.768 23:14:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.768 23:14:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:31.768 Running I/O for 10 seconds... 
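All provisioning in this test goes through rpc_cmd, a thin wrapper around scripts/rpc.py; the second socket, /var/tmp/bdevperf.sock, belongs to the bdevperf instance started above with -z, which idles until driven over RPC. A sketch of the same sequence as direct rpc.py calls (the target's socket is assumed to be the default /var/tmp/spdk.sock, as rpc_cmd does not show it explicitly):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, 8 KiB IO unit
$RPC bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB RAM bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# point the idle bdevperf at the listener and start the queue-depth run:
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests

As a sanity check on the result table that follows: with 1024 requests in flight and an average completion latency of 122686 us, Little's law predicts roughly 1024 / 0.122686 s ≈ 8346 IOPS, within about half a percent of the measured 8311.89.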
00:08:43.982 
00:08:43.983 Latency(us)
00:08:43.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:43.983 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:08:43.983 Verification LBA range: start 0x0 length 0x4000
00:08:43.983 NVMe0n1 : 10.09 8311.89 32.47 0.00 0.00 122686.19 24272.59 80779.19
00:08:43.983 ===================================================================================================================
00:08:43.983 Total : 8311.89 32.47 0.00 0.00 122686.19 24272.59 80779.19
00:08:43.983 0
00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1290305 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1290305 ']' 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1290305 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1290305 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1290305' killing process with pid 1290305 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1290305 Received shutdown signal, test time was about 10.000000 seconds
00:08:43.983 
00:08:43.983 Latency(us)
00:08:43.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:43.983 ===================================================================================================================
00:08:43.983 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1290305 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:43.983 rmmod nvme_tcp 00:08:43.983 rmmod nvme_fabrics 00:08:43.983 rmmod nvme_keyring 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:08:43.983 
23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1290276 ']' 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1290276 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1290276 ']' 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1290276 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1290276 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1290276' 00:08:43.983 killing process with pid 1290276 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1290276 00:08:43.983 23:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1290276 00:08:43.983 23:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:43.983 23:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:43.983 23:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:43.983 23:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:43.983 23:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:43.983 23:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.983 23:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.983 23:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.549 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:44.549 00:08:44.549 real 0m15.927s 00:08:44.549 user 0m22.549s 00:08:44.549 sys 0m2.940s 00:08:44.549 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.549 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.549 ************************************ 00:08:44.549 END TEST nvmf_queue_depth 00:08:44.549 ************************************ 00:08:44.550 23:14:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:44.550 23:14:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:44.550 23:14:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:08:44.550 23:14:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:44.550 ************************************ 00:08:44.550 START TEST nvmf_target_multipath 00:08:44.550 ************************************ 00:08:44.550 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:44.808 * Looking for test storage... 00:08:44.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.808 23:14:42 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.808 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:44.809 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:44.809 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:08:44.809 23:14:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:46.709 23:14:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:46.709 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:46.710 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:46.710 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:46.710 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:46.710 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:46.710 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:46.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:46.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:08:46.970 00:08:46.970 --- 10.0.0.2 ping statistics --- 00:08:46.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.970 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:46.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:46.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:08:46.970 00:08:46.970 --- 10.0.0.1 ping statistics --- 00:08:46.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.970 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:46.970 only one NIC for nvmf test 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:46.970 rmmod nvme_tcp 00:08:46.970 rmmod nvme_fabrics 00:08:46.970 rmmod nvme_keyring 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:46.970 
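The bail-out above is multipath.sh@45 testing '[ -z "$NVMF_SECOND_TARGET_IP" ]': common.sh@240 left NVMF_SECOND_TARGET_IP empty because the box has exactly two ports and one of them became the initiator, so the multipath test cleans up and exits 0 ("only one NIC for nvmf test"). The teardown now in progress, consolidated and hedged (_remove_spdk_ns is internal to common.sh; deleting the spdk namespace is a reading of what it does, not a traced command):

  modprobe -r nvme-tcp nvme-fabrics   # retried up to 20 times under set +e
  ip netns delete cvl_0_0_ns_spdk     # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1            # drop the initiator address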
23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.970 23:14:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.883 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:08:49.142 00:08:49.142 real 0m4.378s 00:08:49.142 user 0m0.813s 00:08:49.142 sys 0m1.541s 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:49.142 ************************************ 00:08:49.142 END TEST nvmf_target_multipath 00:08:49.142 ************************************ 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:49.142 ************************************ 00:08:49.142 START TEST nvmf_zcopy 00:08:49.142 ************************************ 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:49.142 * Looking for test storage... 00:08:49.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.142 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # 
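common.sh derives the initiator identity from the local NVMe CLI. Roughly (the parameter expansion shown is an assumption; the values are the ones visible in this run):

  NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # bare UUID, reused as --hostid
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")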
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:08:49.143 23:14:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # 
local -ga x722 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:51.055 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:51.055 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:51.055 23:14:48 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:51.055 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.055 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:51.055 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:51.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:51.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:08:51.056 00:08:51.056 --- 10.0.0.2 ping statistics --- 00:08:51.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.056 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:51.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:51.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:08:51.056 00:08:51.056 --- 10.0.0.1 ping statistics --- 00:08:51.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.056 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:51.056 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:51.315 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:51.315 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:51.315 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:51.315 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:51.315 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1295479 00:08:51.315 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1295479 00:08:51.315 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1295479 ']' 00:08:51.315 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.315 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:51.315 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:51.315 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.315 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:51.315 23:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:51.315 [2024-07-25 23:14:48.852441] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
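nvmfappstart is bringing the target up inside the target namespace; the essentials as traced (paths shortened, waitforlisten is the common.sh helper that polls the RPC socket):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!                # 1295479 in this run
  waitforlisten $nvmfpid    # blocks until /var/tmp/spdk.sock answers

The -m 0x2 mask pins the target's single reactor to core 1, which is why the startup notices report "Reactor started on core 1" while the bdevperf instances launched later with -c 0x1 run on core 0.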
00:08:51.315 [2024-07-25 23:14:48.852520] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.315 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.315 [2024-07-25 23:14:48.889725] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:51.315 [2024-07-25 23:14:48.921660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.315 [2024-07-25 23:14:49.010402] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.315 [2024-07-25 23:14:49.010476] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.315 [2024-07-25 23:14:49.010492] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.315 [2024-07-25 23:14:49.010506] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.315 [2024-07-25 23:14:49.010518] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:51.315 [2024-07-25 23:14:49.010554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:51.574 [2024-07-25 23:14:49.155093] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:51.574 [2024-07-25 23:14:49.171335] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:51.574 malloc0 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:51.574 { 00:08:51.574 "params": { 00:08:51.574 "name": "Nvme$subsystem", 00:08:51.574 "trtype": "$TEST_TRANSPORT", 00:08:51.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:51.574 "adrfam": "ipv4", 00:08:51.574 "trsvcid": "$NVMF_PORT", 00:08:51.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:51.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:51.574 "hdgst": ${hdgst:-false}, 00:08:51.574 "ddgst": ${ddgst:-false} 00:08:51.574 }, 00:08:51.574 "method": "bdev_nvme_attach_controller" 00:08:51.574 } 00:08:51.574 EOF 00:08:51.574 )") 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
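Target construction for the zcopy test, consolidated into the equivalent rpc.py calls (rpc_cmd is the suite's wrapper around $rpc_py against the running target):

  rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy   # zero-copy on, in-capsule data size 0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_malloc_create 32 4096 -b malloc0          # 32 MiB ram bdev, 4 KiB blocks
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

gen_nvmf_target_json then renders the matching initiator side: the bdev_nvme_attach_controller parameters printed just below become the JSON config handed to bdevperf over /dev/fd/62.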
00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:08:51.574 23:14:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:51.574 "params": { 00:08:51.574 "name": "Nvme1", 00:08:51.574 "trtype": "tcp", 00:08:51.574 "traddr": "10.0.0.2", 00:08:51.574 "adrfam": "ipv4", 00:08:51.574 "trsvcid": "4420", 00:08:51.574 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:51.574 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:51.574 "hdgst": false, 00:08:51.574 "ddgst": false 00:08:51.574 }, 00:08:51.574 "method": "bdev_nvme_attach_controller" 00:08:51.574 }' 00:08:51.574 [2024-07-25 23:14:49.267554] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:08:51.574 [2024-07-25 23:14:49.267622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1295512 ] 00:08:51.574 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.833 [2024-07-25 23:14:49.299952] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:51.833 [2024-07-25 23:14:49.327732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.833 [2024-07-25 23:14:49.416810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.093 Running I/O for 10 seconds... 00:09:02.073 00:09:02.073 Latency(us) 00:09:02.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.073 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:02.073 Verification LBA range: start 0x0 length 0x1000 00:09:02.073 Nvme1n1 : 10.05 5841.67 45.64 0.00 0.00 21763.13 910.22 41554.68 00:09:02.073 =================================================================================================================== 00:09:02.073 Total : 5841.67 45.64 0.00 0.00 21763.13 910.22 41554.68 00:09:02.332 23:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1296822 00:09:02.332 23:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:02.332 23:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:02.332 23:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:02.332 23:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:02.332 23:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:02.332 23:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:02.332 23:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:02.332 23:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:02.332 { 00:09:02.332 "params": { 00:09:02.332 "name": "Nvme$subsystem", 00:09:02.332 "trtype": "$TEST_TRANSPORT", 00:09:02.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:02.332 "adrfam": "ipv4", 00:09:02.332 "trsvcid": "$NVMF_PORT", 00:09:02.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:02.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:02.332 "hdgst": ${hdgst:-false}, 
00:09:02.332 "ddgst": ${ddgst:-false} 00:09:02.332 }, 00:09:02.332 "method": "bdev_nvme_attach_controller" 00:09:02.332 } 00:09:02.332 EOF 00:09:02.332 )") 00:09:02.332 23:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:02.332 [2024-07-25 23:14:59.898557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.332 [2024-07-25 23:14:59.898605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.332 23:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:02.332 23:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:02.332 23:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:02.332 "params": { 00:09:02.332 "name": "Nvme1", 00:09:02.332 "trtype": "tcp", 00:09:02.332 "traddr": "10.0.0.2", 00:09:02.332 "adrfam": "ipv4", 00:09:02.332 "trsvcid": "4420", 00:09:02.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:02.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:02.332 "hdgst": false, 00:09:02.332 "ddgst": false 00:09:02.332 }, 00:09:02.332 "method": "bdev_nvme_attach_controller" 00:09:02.332 }' 00:09:02.333 [2024-07-25 23:14:59.906509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.333 [2024-07-25 23:14:59.906536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.333 [2024-07-25 23:14:59.914524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.333 [2024-07-25 23:14:59.914548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.333 [2024-07-25 23:14:59.922546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.333 [2024-07-25 23:14:59.922571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.333 [2024-07-25 23:14:59.930564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.333 [2024-07-25 23:14:59.930586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.333 [2024-07-25 23:14:59.938581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.333 [2024-07-25 23:14:59.938617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.333 [2024-07-25 23:14:59.938669] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:09:02.333 [2024-07-25 23:14:59.938727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1296822 ]
00:09:02.333 [2024-07-25 23:14:59.946600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:02.333 [2024-07-25 23:14:59.946620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the ERROR pair above repeats every ~8 ms, 23:14:59.954 through 23:15:00.436; the startup notices interleaved with it were: ...]
00:09:02.333 EAL: No free 2048 kB hugepages reported on node 1
00:09:02.333 [2024-07-25 23:14:59.972707] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:02.333 [2024-07-25 23:15:00.002889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:02.593 [2024-07-25 23:15:00.100638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:02.870 Running I/O for 5 seconds...
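The ERROR pair in this run is the SPDK target rejecting nvmf_subsystem_add_ns RPCs whose requested NSID is already occupied; the steady ~8-12 ms cadence suggests the test script re-issues that RPC in a loop while bdevperf runs. A minimal sketch of how the conflict is produced, assuming an illustrative subsystem NQN and malloc bdev names (the exact setup sequence is not shown in this log):

  # create a subsystem and two backing bdevs (NQN and bdev names here are assumptions)
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  ./scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
  # the first add claims NSID 1
  ./scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0
  # a second add asking for the same NSID is rejected: "Requested NSID 1 already in use"
  ./scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc1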
00:09:02.870 [2024-07-25 23:15:00.444048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:02.870 [2024-07-25 23:15:00.444083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same ERROR pair continues every ~12 ms, 23:15:00.456 through 23:15:03.301, with no other events interleaved; this excerpt breaks off mid-entry at [2024-07-25 23:15:03.301108] ...]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.733 [2024-07-25 23:15:03.312520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.733 [2024-07-25 23:15:03.312546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.733 [2024-07-25 23:15:03.323748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.733 [2024-07-25 23:15:03.323774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.733 [2024-07-25 23:15:03.335005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.733 [2024-07-25 23:15:03.335037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.733 [2024-07-25 23:15:03.346940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.733 [2024-07-25 23:15:03.346966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.733 [2024-07-25 23:15:03.358185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.733 [2024-07-25 23:15:03.358211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.733 [2024-07-25 23:15:03.370680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.733 [2024-07-25 23:15:03.370706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.733 [2024-07-25 23:15:03.382147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.733 [2024-07-25 23:15:03.382176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.733 [2024-07-25 23:15:03.394115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.733 [2024-07-25 23:15:03.394142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.733 [2024-07-25 23:15:03.405832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.733 [2024-07-25 23:15:03.405864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.733 [2024-07-25 23:15:03.417556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.733 [2024-07-25 23:15:03.417582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.733 [2024-07-25 23:15:03.428873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.733 [2024-07-25 23:15:03.428899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.733 [2024-07-25 23:15:03.441008] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.733 [2024-07-25 23:15:03.441034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.733 [2024-07-25 23:15:03.452848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.733 [2024-07-25 23:15:03.452874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.995 [2024-07-25 23:15:03.465168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.995 [2024-07-25 23:15:03.465197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.995 [2024-07-25 23:15:03.477622] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.995 [2024-07-25 23:15:03.477656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.995 [2024-07-25 23:15:03.488957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.995 [2024-07-25 23:15:03.488988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.995 [2024-07-25 23:15:03.501181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.995 [2024-07-25 23:15:03.501209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.995 [2024-07-25 23:15:03.512905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.995 [2024-07-25 23:15:03.512932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.995 [2024-07-25 23:15:03.524100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.995 [2024-07-25 23:15:03.524127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.995 [2024-07-25 23:15:03.536126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.995 [2024-07-25 23:15:03.536153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.995 [2024-07-25 23:15:03.548507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.995 [2024-07-25 23:15:03.548533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.995 [2024-07-25 23:15:03.560691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.995 [2024-07-25 23:15:03.560717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.995 [2024-07-25 23:15:03.572864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.995 [2024-07-25 23:15:03.572893] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.995 [2024-07-25 23:15:03.584405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.995 [2024-07-25 23:15:03.584431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.995 [2024-07-25 23:15:03.595856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.995 [2024-07-25 23:15:03.595882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.995 [2024-07-25 23:15:03.607668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.995 [2024-07-25 23:15:03.607694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.995 [2024-07-25 23:15:03.619142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.995 [2024-07-25 23:15:03.619181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.995 [2024-07-25 23:15:03.631193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.995 [2024-07-25 23:15:03.631220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.995 [2024-07-25 23:15:03.642927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.995 [2024-07-25 23:15:03.642952] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.995 [2024-07-25 23:15:03.654932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.995 [2024-07-25 23:15:03.654958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.995 [2024-07-25 23:15:03.668421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.995 [2024-07-25 23:15:03.668447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.995 [2024-07-25 23:15:03.679394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.995 [2024-07-25 23:15:03.679420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.995 [2024-07-25 23:15:03.691890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.995 [2024-07-25 23:15:03.691916] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.995 [2024-07-25 23:15:03.703737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.995 [2024-07-25 23:15:03.703764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.995 [2024-07-25 23:15:03.715521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.995 [2024-07-25 23:15:03.715549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.255 [2024-07-25 23:15:03.727314] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.255 [2024-07-25 23:15:03.727357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.255 [2024-07-25 23:15:03.739028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.255 [2024-07-25 23:15:03.739078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.255 [2024-07-25 23:15:03.751273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.255 [2024-07-25 23:15:03.751300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.255 [2024-07-25 23:15:03.762905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.255 [2024-07-25 23:15:03.762931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.255 [2024-07-25 23:15:03.774384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.255 [2024-07-25 23:15:03.774426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.255 [2024-07-25 23:15:03.786112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.255 [2024-07-25 23:15:03.786139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.255 [2024-07-25 23:15:03.797192] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.255 [2024-07-25 23:15:03.797219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.256 [2024-07-25 23:15:03.808695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.256 [2024-07-25 23:15:03.808721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.256 [2024-07-25 23:15:03.820136] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.256 [2024-07-25 23:15:03.820163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.256 [2024-07-25 23:15:03.831445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.256 [2024-07-25 23:15:03.831486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.256 [2024-07-25 23:15:03.843164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.256 [2024-07-25 23:15:03.843192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.256 [2024-07-25 23:15:03.854870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.256 [2024-07-25 23:15:03.854896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.256 [2024-07-25 23:15:03.866921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.256 [2024-07-25 23:15:03.866948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.256 [2024-07-25 23:15:03.878837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.256 [2024-07-25 23:15:03.878863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.256 [2024-07-25 23:15:03.890788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.256 [2024-07-25 23:15:03.890814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.256 [2024-07-25 23:15:03.904699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.256 [2024-07-25 23:15:03.904727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.256 [2024-07-25 23:15:03.916679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.256 [2024-07-25 23:15:03.916705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.256 [2024-07-25 23:15:03.928629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.256 [2024-07-25 23:15:03.928655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.256 [2024-07-25 23:15:03.940206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.256 [2024-07-25 23:15:03.940234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.256 [2024-07-25 23:15:03.951863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.256 [2024-07-25 23:15:03.951890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.256 [2024-07-25 23:15:03.963669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.256 [2024-07-25 23:15:03.963695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.256 [2024-07-25 23:15:03.975409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.256 [2024-07-25 23:15:03.975436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.515 [2024-07-25 23:15:03.987081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.515 [2024-07-25 23:15:03.987109] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.515 [2024-07-25 23:15:03.998143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.515 [2024-07-25 23:15:03.998171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.515 [2024-07-25 23:15:04.010460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.515 [2024-07-25 23:15:04.010486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.515 [2024-07-25 23:15:04.021869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.515 [2024-07-25 23:15:04.021897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.515 [2024-07-25 23:15:04.033733] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.515 [2024-07-25 23:15:04.033759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.515 [2024-07-25 23:15:04.044707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.515 [2024-07-25 23:15:04.044733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.515 [2024-07-25 23:15:04.056492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.515 [2024-07-25 23:15:04.056519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.515 [2024-07-25 23:15:04.069404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.515 [2024-07-25 23:15:04.069445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.515 [2024-07-25 23:15:04.080946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.515 [2024-07-25 23:15:04.080972] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.515 [2024-07-25 23:15:04.092790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.515 [2024-07-25 23:15:04.092815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.515 [2024-07-25 23:15:04.106409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.515 [2024-07-25 23:15:04.106435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.515 [2024-07-25 23:15:04.117745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.515 [2024-07-25 23:15:04.117771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.515 [2024-07-25 23:15:04.129426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.516 [2024-07-25 23:15:04.129452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.516 [2024-07-25 23:15:04.141272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.516 [2024-07-25 23:15:04.141299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.516 [2024-07-25 23:15:04.153121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.516 [2024-07-25 23:15:04.153149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.516 [2024-07-25 23:15:04.165129] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.516 [2024-07-25 23:15:04.165157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.516 [2024-07-25 23:15:04.176681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.516 [2024-07-25 23:15:04.176708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.516 [2024-07-25 23:15:04.188623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.516 [2024-07-25 23:15:04.188650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.516 [2024-07-25 23:15:04.200501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.516 [2024-07-25 23:15:04.200527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.516 [2024-07-25 23:15:04.212490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.516 [2024-07-25 23:15:04.212518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.516 [2024-07-25 23:15:04.224691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.516 [2024-07-25 23:15:04.224718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.516 [2024-07-25 23:15:04.237071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.516 [2024-07-25 23:15:04.237099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.776 [2024-07-25 23:15:04.249162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.776 [2024-07-25 23:15:04.249191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.776 [2024-07-25 23:15:04.261197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.776 [2024-07-25 23:15:04.261225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.776 [2024-07-25 23:15:04.272743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.776 [2024-07-25 23:15:04.272771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.776 [2024-07-25 23:15:04.284790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.776 [2024-07-25 23:15:04.284816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.776 [2024-07-25 23:15:04.296493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.776 [2024-07-25 23:15:04.296520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.776 [2024-07-25 23:15:04.307836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.776 [2024-07-25 23:15:04.307862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.776 [2024-07-25 23:15:04.318952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.776 [2024-07-25 23:15:04.318977] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.776 [2024-07-25 23:15:04.332650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.776 [2024-07-25 23:15:04.332676] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.776 [2024-07-25 23:15:04.343245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.776 [2024-07-25 23:15:04.343272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.776 [2024-07-25 23:15:04.355000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.776 [2024-07-25 23:15:04.355026] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.776 [2024-07-25 23:15:04.366261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.776 [2024-07-25 23:15:04.366298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.776 [2024-07-25 23:15:04.377954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.776 [2024-07-25 23:15:04.377985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.776 [2024-07-25 23:15:04.389536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.776 [2024-07-25 23:15:04.389564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.776 [2024-07-25 23:15:04.401499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.776 [2024-07-25 23:15:04.401526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.776 [2024-07-25 23:15:04.413288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.776 [2024-07-25 23:15:04.413315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.776 [2024-07-25 23:15:04.425243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.776 [2024-07-25 23:15:04.425270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.776 [2024-07-25 23:15:04.436784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.776 [2024-07-25 23:15:04.436812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.776 [2024-07-25 23:15:04.448830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.776 [2024-07-25 23:15:04.448858] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.776 [2024-07-25 23:15:04.460413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.776 [2024-07-25 23:15:04.460441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.776 [2024-07-25 23:15:04.472247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.776 [2024-07-25 23:15:04.472275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.776 [2024-07-25 23:15:04.483973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.776 [2024-07-25 23:15:04.484000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.776 [2024-07-25 23:15:04.495787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.776 [2024-07-25 23:15:04.495814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.036 [2024-07-25 23:15:04.507732] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.036 [2024-07-25 23:15:04.507759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.036 [2024-07-25 23:15:04.519766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.036 [2024-07-25 23:15:04.519792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.036 [2024-07-25 23:15:04.531408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.036 [2024-07-25 23:15:04.531434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.036 [2024-07-25 23:15:04.542732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.037 [2024-07-25 23:15:04.542758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.037 [2024-07-25 23:15:04.554406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.037 [2024-07-25 23:15:04.554432] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.037 [2024-07-25 23:15:04.565724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.037 [2024-07-25 23:15:04.565750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.037 [2024-07-25 23:15:04.577373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.037 [2024-07-25 23:15:04.577399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.037 [2024-07-25 23:15:04.589013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.037 [2024-07-25 23:15:04.589072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.037 [2024-07-25 23:15:04.600417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.037 [2024-07-25 23:15:04.600443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.037 [2024-07-25 23:15:04.613988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.037 [2024-07-25 23:15:04.614015] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.037 [2024-07-25 23:15:04.624503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.037 [2024-07-25 23:15:04.624530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.037 [2024-07-25 23:15:04.635976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.037 [2024-07-25 23:15:04.636003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.037 [2024-07-25 23:15:04.647477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.037 [2024-07-25 23:15:04.647504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.037 [2024-07-25 23:15:04.658740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.037 [2024-07-25 23:15:04.658767] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.037 [2024-07-25 23:15:04.670217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.037 [2024-07-25 23:15:04.670245] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.037 [2024-07-25 23:15:04.682868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.037 [2024-07-25 23:15:04.682894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.037 [2024-07-25 23:15:04.693805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.037 [2024-07-25 23:15:04.693831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.037 [2024-07-25 23:15:04.706125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.037 [2024-07-25 23:15:04.706153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.037 [2024-07-25 23:15:04.718011] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.037 [2024-07-25 23:15:04.718051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.037 [2024-07-25 23:15:04.729653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.037 [2024-07-25 23:15:04.729679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.037 [2024-07-25 23:15:04.742644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.037 [2024-07-25 23:15:04.742671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.037 [2024-07-25 23:15:04.753987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.037 [2024-07-25 23:15:04.754013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.297 [2024-07-25 23:15:04.766150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.297 [2024-07-25 23:15:04.766177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.297 [2024-07-25 23:15:04.777617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.297 [2024-07-25 23:15:04.777643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.297 [2024-07-25 23:15:04.790783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.297 [2024-07-25 23:15:04.790810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.297 [2024-07-25 23:15:04.801713] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.297 [2024-07-25 23:15:04.801743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.297 [2024-07-25 23:15:04.812852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.297 [2024-07-25 23:15:04.812886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.297 [2024-07-25 23:15:04.826626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.297 [2024-07-25 23:15:04.826653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.297 [2024-07-25 23:15:04.838451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.297 [2024-07-25 23:15:04.838477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.297 [2024-07-25 23:15:04.852038] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.297 [2024-07-25 23:15:04.852086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.297 [2024-07-25 23:15:04.863186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.297 [2024-07-25 23:15:04.863213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.297 [2024-07-25 23:15:04.875238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.297 [2024-07-25 23:15:04.875266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.297 [2024-07-25 23:15:04.886897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.297 [2024-07-25 23:15:04.886923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.297 [2024-07-25 23:15:04.898301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.297 [2024-07-25 23:15:04.898327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.297 [2024-07-25 23:15:04.910664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.297 [2024-07-25 23:15:04.910690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.297 [2024-07-25 23:15:04.922324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.297 [2024-07-25 23:15:04.922364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.298 [2024-07-25 23:15:04.934083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.298 [2024-07-25 23:15:04.934111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.298 [2024-07-25 23:15:04.946091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.298 [2024-07-25 23:15:04.946122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.298 [2024-07-25 23:15:04.958114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.298 [2024-07-25 23:15:04.958141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.298 [2024-07-25 23:15:04.969937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.298 [2024-07-25 23:15:04.969963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.298 [2024-07-25 23:15:04.983721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.298 [2024-07-25 23:15:04.983752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.298 [2024-07-25 23:15:04.995005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.298 [2024-07-25 23:15:04.995032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.298 [2024-07-25 23:15:05.011430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.298 [2024-07-25 23:15:05.011458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.558 [2024-07-25 23:15:05.023212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.558 [2024-07-25 23:15:05.023255] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.558 [2024-07-25 23:15:05.035250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.558 [2024-07-25 23:15:05.035281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.558 [2024-07-25 23:15:05.046952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.558 [2024-07-25 23:15:05.046987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.558 [2024-07-25 23:15:05.060259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.558 [2024-07-25 23:15:05.060286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.558 [2024-07-25 23:15:05.071904] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.558 [2024-07-25 23:15:05.071929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.558 [2024-07-25 23:15:05.083556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.558 [2024-07-25 23:15:05.083582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.558 [2024-07-25 23:15:05.095251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.558 [2024-07-25 23:15:05.095278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.558 [2024-07-25 23:15:05.107227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.558 [2024-07-25 23:15:05.107253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.558 [2024-07-25 23:15:05.118545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.558 [2024-07-25 23:15:05.118571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.558 [2024-07-25 23:15:05.130098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.558 [2024-07-25 23:15:05.130124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.558 [2024-07-25 23:15:05.141536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.558 [2024-07-25 23:15:05.141562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.558 [2024-07-25 23:15:05.155076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.558 [2024-07-25 23:15:05.155103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.558 [2024-07-25 23:15:05.166091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.558 [2024-07-25 23:15:05.166117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.558 [2024-07-25 23:15:05.177729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.558 [2024-07-25 23:15:05.177755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.558 [2024-07-25 23:15:05.189215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.558 [2024-07-25 23:15:05.189241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.558 [2024-07-25 23:15:05.200941] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.558 [2024-07-25 23:15:05.200967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.558 [2024-07-25 23:15:05.212830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.558 [2024-07-25 23:15:05.212856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.558 [2024-07-25 23:15:05.226275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.558 [2024-07-25 23:15:05.226303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.558 [2024-07-25 23:15:05.237186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.558 [2024-07-25 23:15:05.237218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.558 [2024-07-25 23:15:05.249402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.558 [2024-07-25 23:15:05.249428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.558 [2024-07-25 23:15:05.261014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.558 [2024-07-25 23:15:05.261054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.558 [2024-07-25 23:15:05.272492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.558 [2024-07-25 23:15:05.272524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.819 [2024-07-25 23:15:05.284280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.819 [2024-07-25 23:15:05.284308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.819 [2024-07-25 23:15:05.296087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.819 [2024-07-25 23:15:05.296115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.819 [2024-07-25 23:15:05.307764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.819 [2024-07-25 23:15:05.307791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.819 [2024-07-25 23:15:05.319488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.819 [2024-07-25 23:15:05.319515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.819 [2024-07-25 23:15:05.331969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.819 [2024-07-25 23:15:05.331996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.819 [2024-07-25 23:15:05.343686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.819 [2024-07-25 23:15:05.343713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.819 [2024-07-25 23:15:05.355709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.819 [2024-07-25 23:15:05.355735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.819 [2024-07-25 23:15:05.367932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.819 [2024-07-25 23:15:05.367968] 
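For context on the pair elided above: the subsystem.c:2058 message is SPDK refusing an add-namespace request whose explicit NSID is already occupied, and nvmf_rpc.c:1553 is the RPC path (here exercised against a paused subsystem) surfacing that failure. A minimal sketch that provokes the same two messages against a running target is shown below; only the subsystem NQN is taken from this log, while the malloc bdev name and its size/block-size arguments are illustrative assumptions:

    # Hypothetical reproduction sketch (Malloc0, 64 MiB total size, 512 B blocks are assumed values).
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1  # NSID 1 free: succeeds
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1  # NSID 1 taken: fails with the pair above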
00:09:07.819
00:09:07.819                                                                                 Latency(us)
00:09:07.819 Device Information                                                   : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:07.819 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:07.819 Nvme1n1                                                              :       5.01   10737.14      83.88       0.00     0.00   11904.42    4830.25   22136.60
00:09:07.819 ===================================================================================================================
00:09:07.819 Total                                                                :              10737.14      83.88       0.00     0.00   11904.42    4830.25   22136.60
[... after the summary the same error pair resumes at 23:15:05.469869 and repeats at roughly 8 ms intervals through 23:15:05.694528 while the test winds down; about thirty more pairs elided ...]
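The teardown that follows reaps the background abort job, removes NSID 1, wraps the malloc0 bdev in a delay bdev named delay0, and re-adds the namespace backed by delay0 (zcopy.sh steps @49 through @54 below). rpc_cmd in these test scripts forwards to scripts/rpc.py, so the equivalent direct invocation is roughly the following sketch, with the latency arguments being the microsecond values visible in the log:

    # Direct rpc.py form of the namespace swap performed below (sketch).
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000  # avg/p99 read and write latency (us)
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1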
00:09:08.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1296822) - No such process
00:09:08.081 23:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1296822
00:09:08.081 23:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:08.081 23:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.081 23:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:08.081 23:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.081 23:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:08.081 23:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.081 23:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:09:08.081 delay0 00:09:08.081 23:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.081 23:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:08.081 23:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.081 23:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:08.081 23:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.081 23:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:08.081 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.340 [2024-07-25 23:15:05.850153] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:16.466 Initializing NVMe Controllers 00:09:16.466 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:16.466 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:16.466 Initialization complete. Launching workers. 00:09:16.466 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 266, failed: 17623 00:09:16.466 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 17781, failed to submit 108 00:09:16.466 success 17675, unsuccess 106, failed 0 00:09:16.466 23:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:16.466 23:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:16.466 23:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:16.466 23:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:16.466 23:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:16.466 23:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:16.466 23:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:16.466 23:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:16.466 rmmod nvme_tcp 00:09:16.466 rmmod nvme_fabrics 00:09:16.466 rmmod nvme_keyring 00:09:16.466 23:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:16.466 23:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:16.466 23:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:16.466 23:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1295479 ']' 00:09:16.466 23:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1295479 00:09:16.466 23:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1295479 ']' 00:09:16.466 23:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1295479 00:09:16.466 23:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:16.466 23:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:16.467 23:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1295479 00:09:16.467 23:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:16.467 23:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:16.467 23:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1295479' 00:09:16.467 killing process with pid 1295479 00:09:16.467 23:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1295479 00:09:16.467 23:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1295479 00:09:16.467 23:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:16.467 23:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:16.467 23:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:16.467 23:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:16.467 23:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:16.467 23:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.467 23:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.467 23:15:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:17.844 00:09:17.844 real 0m28.627s 00:09:17.844 user 0m41.141s 00:09:17.844 sys 0m9.571s 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:17.844 ************************************ 00:09:17.844 END TEST nvmf_zcopy 00:09:17.844 ************************************ 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:17.844 ************************************ 00:09:17.844 START TEST nvmf_nmic 00:09:17.844 ************************************ 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:17.844 * Looking for test storage... 
00:09:17.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:17.844 23:15:15 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:09:17.844 23:15:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:19.746 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:19.746 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:09:19.746 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:19.746 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:19.746 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:19.746 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:19.746 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:19.746 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:09:19.746 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:19.746 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:09:19.746 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:09:19.746 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:09:19.746 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:09:19.746 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:09:19.746 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:09:19.746 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:19.746 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:19.747 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:19.747 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:19.747 23:15:17 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:19.747 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:19.747 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:19.747 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:20.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:20.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:09:20.008 00:09:20.008 --- 10.0.0.2 ping statistics --- 00:09:20.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.008 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:20.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:20.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:09:20.008 00:09:20.008 --- 10.0.0.1 ping statistics --- 00:09:20.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.008 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1300848 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1300848 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1300848 ']' 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:20.008 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:20.008 [2024-07-25 23:15:17.612090] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:09:20.008 [2024-07-25 23:15:17.612167] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.008 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.008 [2024-07-25 23:15:17.654141] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:20.008 [2024-07-25 23:15:17.685723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:20.269 [2024-07-25 23:15:17.784629] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.269 [2024-07-25 23:15:17.784689] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.269 [2024-07-25 23:15:17.784705] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.269 [2024-07-25 23:15:17.784718] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.269 [2024-07-25 23:15:17.784730] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
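For reference, the target bring-up that the harness just performed (nvmf/common.sh@229-268 above) can be reproduced by hand. This is a minimal sketch, assuming the same Intel E810 port names (cvl_0_0/cvl_0_1), the 10.0.0.0/24 addressing, and an SPDK build tree as recorded in this run; on another machine the PCI devices, interface names, and paths will differ.
# Move one port into a namespace to act as the target; keep the peer port as initiator
sudo ip netns add cvl_0_0_ns_spdk
sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
sudo ip addr add 10.0.0.1/24 dev cvl_0_1
sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
sudo ip link set cvl_0_1 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity-check both directions before starting the target, as the log does
ping -c 1 10.0.0.2
sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# Launch nvmf_tgt inside the namespace with the same flags used above
sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &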
00:09:20.269 [2024-07-25 23:15:17.784785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.269 [2024-07-25 23:15:17.784841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.269 [2024-07-25 23:15:17.784882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.269 [2024-07-25 23:15:17.784885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:20.269 [2024-07-25 23:15:17.935297] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:20.269 Malloc0 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:20.269 [2024-07-25 23:15:17.988559] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:20.269 test case1: single bdev can't be used in multiple subsystems 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.269 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:20.529 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.529 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:20.529 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.530 23:15:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:20.530 23:15:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.530 23:15:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:20.530 23:15:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:20.530 23:15:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.530 23:15:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:20.530 [2024-07-25 23:15:18.012375] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:20.530 [2024-07-25 23:15:18.012403] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:20.530 [2024-07-25 23:15:18.012418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.530 request: 00:09:20.530 { 00:09:20.530 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:20.530 "namespace": { 00:09:20.530 "bdev_name": "Malloc0", 00:09:20.530 "no_auto_visible": false 00:09:20.530 }, 00:09:20.530 "method": "nvmf_subsystem_add_ns", 00:09:20.530 "req_id": 1 00:09:20.530 } 00:09:20.530 Got JSON-RPC error response 00:09:20.530 response: 00:09:20.530 { 00:09:20.530 "code": -32602, 00:09:20.530 "message": "Invalid parameters" 00:09:20.530 } 00:09:20.530 23:15:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:20.530 23:15:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:20.530 23:15:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:20.530 23:15:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:20.530 Adding namespace failed - expected result. 
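The negative test above (test case1) is checking bdev claim semantics: once Malloc0 is attached as a namespace of cnode1, the NVMe-oF target holds an exclusive_write claim on it, so attaching it to a second subsystem must fail with the -32602 "Invalid parameters" JSON-RPC error shown. A sketch of driving the same check with scripts/rpc.py (the JSON-RPC client this suite configures as rpc_py), assuming a running nvmf_tgt reachable on the default /var/tmp/spdk.sock:
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
# Second attach must be rejected: Malloc0 is already claimed by cnode1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'rejected as expected'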
00:09:20.530 23:15:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:20.530 test case2: host connect to nvmf target in multiple paths 00:09:20.530 23:15:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:20.530 23:15:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.530 23:15:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:20.530 [2024-07-25 23:15:18.020505] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:20.530 23:15:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.530 23:15:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:21.099 23:15:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:21.666 23:15:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:21.666 23:15:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:21.666 23:15:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:21.666 23:15:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:21.666 23:15:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:23.572 23:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:23.572 23:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:23.572 23:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:23.572 23:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:23.572 23:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:23.572 23:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:23.572 23:15:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:23.572 [global] 00:09:23.572 thread=1 00:09:23.572 invalidate=1 00:09:23.572 rw=write 00:09:23.572 time_based=1 00:09:23.572 runtime=1 00:09:23.572 ioengine=libaio 00:09:23.572 direct=1 00:09:23.572 bs=4096 00:09:23.572 iodepth=1 00:09:23.572 norandommap=0 00:09:23.572 numjobs=1 00:09:23.572 00:09:23.572 verify_dump=1 00:09:23.572 verify_backlog=512 00:09:23.572 verify_state_save=0 00:09:23.572 do_verify=1 00:09:23.572 verify=crc32c-intel 00:09:23.572 [job0] 00:09:23.572 filename=/dev/nvme0n1 00:09:23.572 Could not set queue depth (nvme0n1) 00:09:23.831 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:09:23.831 fio-3.35 00:09:23.831 Starting 1 thread 00:09:25.264 00:09:25.264 job0: (groupid=0, jobs=1): err= 0: pid=1301485: Thu Jul 25 23:15:22 2024 00:09:25.264 read: IOPS=1589, BW=6358KiB/s (6510kB/s)(6364KiB/1001msec) 00:09:25.264 slat (nsec): min=5900, max=61404, avg=19211.72, stdev=9804.43 00:09:25.264 clat (usec): min=238, max=2615, avg=306.59, stdev=72.55 00:09:25.264 lat (usec): min=244, max=2630, avg=325.81, stdev=75.54 00:09:25.264 clat percentiles (usec): 00:09:25.264 | 1.00th=[ 249], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 273], 00:09:25.264 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 306], 00:09:25.264 | 70.00th=[ 314], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[ 375], 00:09:25.264 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 570], 99.95th=[ 2606], 00:09:25.264 | 99.99th=[ 2606] 00:09:25.264 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:25.265 slat (usec): min=7, max=40816, avg=49.45, stdev=1094.78 00:09:25.265 clat (usec): min=145, max=356, avg=177.27, stdev=19.30 00:09:25.265 lat (usec): min=157, max=41015, avg=226.72, stdev=1095.78 00:09:25.265 clat percentiles (usec): 00:09:25.265 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:09:25.265 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:09:25.265 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 200], 95.00th=[ 212], 00:09:25.265 | 99.00th=[ 253], 99.50th=[ 281], 99.90th=[ 338], 99.95th=[ 355], 00:09:25.265 | 99.99th=[ 355] 00:09:25.265 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:09:25.265 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:25.265 lat (usec) : 250=56.47%, 500=42.92%, 750=0.58% 00:09:25.265 lat (msec) : 4=0.03% 00:09:25.265 cpu : usr=3.60%, sys=6.10%, ctx=3644, majf=0, minf=2 00:09:25.265 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:25.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.265 issued rwts: total=1591,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.265 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:25.265 00:09:25.265 Run status group 0 (all jobs): 00:09:25.265 READ: bw=6358KiB/s (6510kB/s), 6358KiB/s-6358KiB/s (6510kB/s-6510kB/s), io=6364KiB (6517kB), run=1001-1001msec 00:09:25.265 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:09:25.265 00:09:25.265 Disk stats (read/write): 00:09:25.265 nvme0n1: ios=1590/1574, merge=0/0, ticks=667/277, in_queue=944, util=99.80% 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:25.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:25.265 23:15:22 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:25.265 rmmod nvme_tcp 00:09:25.265 rmmod nvme_fabrics 00:09:25.265 rmmod nvme_keyring 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1300848 ']' 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1300848 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1300848 ']' 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1300848 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1300848 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1300848' 00:09:25.265 killing process with pid 1300848 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1300848 00:09:25.265 23:15:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1300848 00:09:25.525 23:15:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:25.525 23:15:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:25.525 23:15:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:25.525 23:15:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:25.525 23:15:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:25.525 23:15:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.525 23:15:23 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.525 23:15:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.430 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:27.430 00:09:27.430 real 0m9.779s 00:09:27.430 user 0m21.874s 00:09:27.430 sys 0m2.384s 00:09:27.430 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.430 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.430 ************************************ 00:09:27.430 END TEST nvmf_nmic 00:09:27.430 ************************************ 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:27.688 ************************************ 00:09:27.688 START TEST nvmf_fio_target 00:09:27.688 ************************************ 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:27.688 * Looking for test storage... 00:09:27.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:09:27.688 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:27.689 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:27.689 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.689 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.689 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.689 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:27.689 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:27.689 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:27.689 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:27.689 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:27.689 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:27.689 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:27.689 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:27.689 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.689 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:27.689 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:27.689 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:27.689 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.689 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.689 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.689 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:27.689 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:27.689 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:09:27.689 23:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:29.594 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:29.594 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:29.594 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:29.594 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:29.594 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:29.595 23:15:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:29.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:29.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:09:29.595 00:09:29.595 --- 10.0.0.2 ping statistics --- 00:09:29.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.595 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:29.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:29.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:09:29.595 00:09:29.595 --- 10.0.0.1 ping statistics --- 00:09:29.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.595 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1303556 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1303556 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1303556 ']' 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.595 23:15:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:29.595 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:29.852 [2024-07-25 23:15:27.341900] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:09:29.852 [2024-07-25 23:15:27.341981] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.852 EAL: No free 2048 kB hugepages reported on node 1 00:09:29.852 [2024-07-25 23:15:27.379824] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:29.852 [2024-07-25 23:15:27.411831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:29.852 [2024-07-25 23:15:27.504067] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.852 [2024-07-25 23:15:27.504117] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.852 [2024-07-25 23:15:27.504133] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.852 [2024-07-25 23:15:27.504147] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.852 [2024-07-25 23:15:27.504159] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
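[Editor's note] The trace above shows the harness entering target/fio.sh: one port of the E810 pair (cvl_0_0) is moved into the network namespace cvl_0_0_ns_spdk and given 10.0.0.2, the host-side port cvl_0_1 gets 10.0.0.1, reachability is verified with ping in both directions, nvme-tcp is loaded, and nvmf_tgt is launched inside the namespace before being driven over JSON-RPC. A minimal sketch of that launch-and-wait pattern, using the namespace, binary path, and RPC socket /var/tmp/spdk.sock seen in this log; the polling loop is an illustrative stand-in for the harness's waitforlisten helper, not its actual implementation:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Start the SPDK NVMe-oF target inside the isolated namespace
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the RPC socket until the app answers (stand-in for waitforlisten)
    until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    # Create the TCP transport with 8 KiB in-capsule data, as issued below
    "$rpc_py" nvmf_create_transport -t tcp -o -u 8192

Once the transport exists, the rpc.py calls that follow build the backing devices (malloc bdevs, a raid0 and a concat raid), expose them as namespaces of nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and connect to them from the host side with nvme connect before the fio runs start.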
00:09:29.852 [2024-07-25 23:15:27.504240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.852 [2024-07-25 23:15:27.504297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.852 [2024-07-25 23:15:27.504354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.852 [2024-07-25 23:15:27.504354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.109 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:30.109 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:30.109 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:30.109 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:30.109 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:30.109 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.109 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:30.368 [2024-07-25 23:15:27.873884] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.368 23:15:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:30.625 23:15:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:30.625 23:15:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:30.882 23:15:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:30.882 23:15:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:31.140 23:15:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:31.140 23:15:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:31.397 23:15:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:31.397 23:15:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:31.654 23:15:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:31.912 23:15:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:31.912 23:15:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:32.169 23:15:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:32.169 23:15:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:32.426 23:15:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:32.426 23:15:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:32.684 23:15:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:32.942 23:15:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:32.942 23:15:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:33.199 23:15:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:33.199 23:15:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:33.456 23:15:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.714 [2024-07-25 23:15:31.311716] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.714 23:15:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:33.971 23:15:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:34.230 23:15:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:34.796 23:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:34.796 23:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:34.796 23:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:34.796 23:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:34.796 23:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:34.796 23:15:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:37.327 23:15:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:37.327 23:15:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:37.327 23:15:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:37.327 23:15:34 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:37.327 23:15:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:37.327 23:15:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:37.327 23:15:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:37.327 [global] 00:09:37.327 thread=1 00:09:37.327 invalidate=1 00:09:37.327 rw=write 00:09:37.327 time_based=1 00:09:37.327 runtime=1 00:09:37.327 ioengine=libaio 00:09:37.327 direct=1 00:09:37.327 bs=4096 00:09:37.327 iodepth=1 00:09:37.327 norandommap=0 00:09:37.327 numjobs=1 00:09:37.327 00:09:37.327 verify_dump=1 00:09:37.327 verify_backlog=512 00:09:37.327 verify_state_save=0 00:09:37.327 do_verify=1 00:09:37.327 verify=crc32c-intel 00:09:37.327 [job0] 00:09:37.327 filename=/dev/nvme0n1 00:09:37.327 [job1] 00:09:37.327 filename=/dev/nvme0n2 00:09:37.327 [job2] 00:09:37.327 filename=/dev/nvme0n3 00:09:37.327 [job3] 00:09:37.327 filename=/dev/nvme0n4 00:09:37.327 Could not set queue depth (nvme0n1) 00:09:37.327 Could not set queue depth (nvme0n2) 00:09:37.327 Could not set queue depth (nvme0n3) 00:09:37.327 Could not set queue depth (nvme0n4) 00:09:37.327 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:37.327 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:37.327 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:37.327 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:37.327 fio-3.35 00:09:37.327 Starting 4 threads 00:09:38.258 00:09:38.258 job0: (groupid=0, jobs=1): err= 0: pid=1304529: Thu Jul 25 23:15:35 2024 00:09:38.258 read: IOPS=22, BW=89.1KiB/s (91.3kB/s)(92.0KiB/1032msec) 00:09:38.258 slat (nsec): min=8865, max=33893, avg=21958.83, stdev=8750.43 00:09:38.258 clat (usec): min=234, max=42006, avg=39530.34, stdev=8580.26 00:09:38.258 lat (usec): min=249, max=42023, avg=39552.30, stdev=8581.93 00:09:38.258 clat percentiles (usec): 00:09:38.258 | 1.00th=[ 235], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:38.258 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:38.258 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:38.258 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:38.258 | 99.99th=[42206] 00:09:38.258 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:09:38.258 slat (nsec): min=7786, max=48223, avg=18315.01, stdev=6040.10 00:09:38.258 clat (usec): min=166, max=874, avg=215.24, stdev=46.08 00:09:38.258 lat (usec): min=182, max=882, avg=233.56, stdev=45.86 00:09:38.258 clat percentiles (usec): 00:09:38.258 | 1.00th=[ 176], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 196], 00:09:38.258 | 30.00th=[ 200], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 212], 00:09:38.258 | 70.00th=[ 219], 80.00th=[ 231], 90.00th=[ 247], 95.00th=[ 258], 00:09:38.258 | 99.00th=[ 302], 99.50th=[ 529], 99.90th=[ 873], 99.95th=[ 873], 00:09:38.258 | 99.99th=[ 873] 00:09:38.258 bw ( KiB/s): min= 4096, max= 4096, per=23.16%, avg=4096.00, stdev= 0.00, samples=1 00:09:38.258 iops : min= 1024, max= 1024, 
avg=1024.00, stdev= 0.00, samples=1 00:09:38.258 lat (usec) : 250=89.35%, 500=5.98%, 750=0.37%, 1000=0.19% 00:09:38.258 lat (msec) : 50=4.11% 00:09:38.258 cpu : usr=0.97%, sys=0.97%, ctx=535, majf=0, minf=1 00:09:38.258 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:38.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.258 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.258 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:38.258 job1: (groupid=0, jobs=1): err= 0: pid=1304555: Thu Jul 25 23:15:35 2024 00:09:38.258 read: IOPS=1736, BW=6945KiB/s (7112kB/s)(6952KiB/1001msec) 00:09:38.258 slat (nsec): min=6766, max=44168, avg=13456.95, stdev=5548.02 00:09:38.258 clat (usec): min=219, max=987, avg=302.00, stdev=71.89 00:09:38.258 lat (usec): min=226, max=1013, avg=315.45, stdev=74.32 00:09:38.258 clat percentiles (usec): 00:09:38.258 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 251], 00:09:38.258 | 30.00th=[ 262], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 297], 00:09:38.258 | 70.00th=[ 310], 80.00th=[ 326], 90.00th=[ 412], 95.00th=[ 478], 00:09:38.258 | 99.00th=[ 545], 99.50th=[ 562], 99.90th=[ 693], 99.95th=[ 988], 00:09:38.258 | 99.99th=[ 988] 00:09:38.258 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:38.258 slat (usec): min=8, max=1476, avg=16.76, stdev=33.61 00:09:38.258 clat (usec): min=149, max=403, avg=195.86, stdev=32.22 00:09:38.258 lat (usec): min=159, max=1735, avg=212.62, stdev=50.30 00:09:38.258 clat percentiles (usec): 00:09:38.258 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:09:38.258 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 190], 60.00th=[ 198], 00:09:38.258 | 70.00th=[ 204], 80.00th=[ 217], 90.00th=[ 235], 95.00th=[ 253], 00:09:38.258 | 99.00th=[ 314], 99.50th=[ 359], 99.90th=[ 392], 99.95th=[ 396], 00:09:38.258 | 99.99th=[ 404] 00:09:38.258 bw ( KiB/s): min= 8192, max= 8192, per=46.31%, avg=8192.00, stdev= 0.00, samples=1 00:09:38.258 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:38.258 lat (usec) : 250=60.43%, 500=38.54%, 750=1.00%, 1000=0.03% 00:09:38.258 cpu : usr=4.50%, sys=7.40%, ctx=3790, majf=0, minf=1 00:09:38.258 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:38.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.258 issued rwts: total=1738,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.258 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:38.258 job2: (groupid=0, jobs=1): err= 0: pid=1304611: Thu Jul 25 23:15:35 2024 00:09:38.258 read: IOPS=991, BW=3965KiB/s (4061kB/s)(4132KiB/1042msec) 00:09:38.258 slat (nsec): min=6916, max=44188, avg=12076.06, stdev=5242.09 00:09:38.258 clat (usec): min=256, max=41965, avg=651.80, stdev=3786.86 00:09:38.258 lat (usec): min=264, max=41982, avg=663.88, stdev=3788.14 00:09:38.258 clat percentiles (usec): 00:09:38.258 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 281], 00:09:38.258 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 302], 00:09:38.258 | 70.00th=[ 310], 80.00th=[ 314], 90.00th=[ 322], 95.00th=[ 330], 00:09:38.258 | 99.00th=[ 351], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:09:38.258 | 99.99th=[42206] 00:09:38.258 write: IOPS=1474, 
BW=5896KiB/s (6038kB/s)(6144KiB/1042msec); 0 zone resets 00:09:38.258 slat (nsec): min=7749, max=57917, avg=15019.42, stdev=6767.58 00:09:38.258 clat (usec): min=166, max=537, avg=210.11, stdev=26.87 00:09:38.258 lat (usec): min=174, max=548, avg=225.13, stdev=31.75 00:09:38.258 clat percentiles (usec): 00:09:38.259 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 184], 00:09:38.259 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 208], 60.00th=[ 219], 00:09:38.259 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 245], 95.00th=[ 253], 00:09:38.259 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 293], 99.95th=[ 537], 00:09:38.259 | 99.99th=[ 537] 00:09:38.259 bw ( KiB/s): min= 4096, max= 8192, per=34.73%, avg=6144.00, stdev=2896.31, samples=2 00:09:38.259 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:09:38.259 lat (usec) : 250=55.98%, 500=43.64%, 750=0.04% 00:09:38.259 lat (msec) : 50=0.35% 00:09:38.259 cpu : usr=2.98%, sys=4.13%, ctx=2569, majf=0, minf=2 00:09:38.259 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:38.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.259 issued rwts: total=1033,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.259 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:38.259 job3: (groupid=0, jobs=1): err= 0: pid=1304631: Thu Jul 25 23:15:35 2024 00:09:38.259 read: IOPS=22, BW=89.0KiB/s (91.1kB/s)(92.0KiB/1034msec) 00:09:38.259 slat (nsec): min=14718, max=33719, avg=22183.04, stdev=8036.09 00:09:38.259 clat (usec): min=412, max=41913, avg=39209.85, stdev=8460.96 00:09:38.259 lat (usec): min=434, max=41945, avg=39232.03, stdev=8461.08 00:09:38.259 clat percentiles (usec): 00:09:38.259 | 1.00th=[ 412], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:38.259 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:38.259 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:38.259 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:38.259 | 99.99th=[41681] 00:09:38.259 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:09:38.259 slat (nsec): min=6576, max=57728, avg=16998.05, stdev=7630.49 00:09:38.259 clat (usec): min=173, max=430, avg=235.79, stdev=40.02 00:09:38.259 lat (usec): min=191, max=487, avg=252.78, stdev=42.06 00:09:38.259 clat percentiles (usec): 00:09:38.259 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:09:38.259 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 233], 00:09:38.259 | 70.00th=[ 241], 80.00th=[ 253], 90.00th=[ 281], 95.00th=[ 330], 00:09:38.259 | 99.00th=[ 383], 99.50th=[ 392], 99.90th=[ 433], 99.95th=[ 433], 00:09:38.259 | 99.99th=[ 433] 00:09:38.259 bw ( KiB/s): min= 4096, max= 4096, per=23.16%, avg=4096.00, stdev= 0.00, samples=1 00:09:38.259 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:38.259 lat (usec) : 250=74.21%, 500=21.68% 00:09:38.259 lat (msec) : 50=4.11% 00:09:38.259 cpu : usr=0.48%, sys=0.77%, ctx=535, majf=0, minf=1 00:09:38.259 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:38.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.259 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.259 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:09:38.259 00:09:38.259 Run status group 0 (all jobs): 00:09:38.259 READ: bw=10.6MiB/s (11.1MB/s), 89.0KiB/s-6945KiB/s (91.1kB/s-7112kB/s), io=11.0MiB (11.5MB), run=1001-1042msec 00:09:38.259 WRITE: bw=17.3MiB/s (18.1MB/s), 1981KiB/s-8184KiB/s (2028kB/s-8380kB/s), io=18.0MiB (18.9MB), run=1001-1042msec 00:09:38.259 00:09:38.259 Disk stats (read/write): 00:09:38.259 nvme0n1: ios=67/512, merge=0/0, ticks=688/102, in_queue=790, util=82.46% 00:09:38.259 nvme0n2: ios=1556/1536, merge=0/0, ticks=667/278, in_queue=945, util=98.14% 00:09:38.259 nvme0n3: ios=1027/1536, merge=0/0, ticks=418/313, in_queue=731, util=87.83% 00:09:38.259 nvme0n4: ios=17/512, merge=0/0, ticks=657/116, in_queue=773, util=89.18% 00:09:38.259 23:15:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:38.516 [global] 00:09:38.516 thread=1 00:09:38.516 invalidate=1 00:09:38.516 rw=randwrite 00:09:38.516 time_based=1 00:09:38.516 runtime=1 00:09:38.516 ioengine=libaio 00:09:38.516 direct=1 00:09:38.516 bs=4096 00:09:38.516 iodepth=1 00:09:38.516 norandommap=0 00:09:38.516 numjobs=1 00:09:38.516 00:09:38.516 verify_dump=1 00:09:38.516 verify_backlog=512 00:09:38.516 verify_state_save=0 00:09:38.516 do_verify=1 00:09:38.516 verify=crc32c-intel 00:09:38.516 [job0] 00:09:38.516 filename=/dev/nvme0n1 00:09:38.516 [job1] 00:09:38.516 filename=/dev/nvme0n2 00:09:38.516 [job2] 00:09:38.516 filename=/dev/nvme0n3 00:09:38.516 [job3] 00:09:38.516 filename=/dev/nvme0n4 00:09:38.516 Could not set queue depth (nvme0n1) 00:09:38.516 Could not set queue depth (nvme0n2) 00:09:38.516 Could not set queue depth (nvme0n3) 00:09:38.516 Could not set queue depth (nvme0n4) 00:09:38.516 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:38.516 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:38.516 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:38.516 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:38.516 fio-3.35 00:09:38.516 Starting 4 threads 00:09:39.887 00:09:39.887 job0: (groupid=0, jobs=1): err= 0: pid=1304868: Thu Jul 25 23:15:37 2024 00:09:39.887 read: IOPS=20, BW=83.4KiB/s (85.4kB/s)(84.0KiB/1007msec) 00:09:39.887 slat (nsec): min=8310, max=25674, avg=14228.67, stdev=2905.46 00:09:39.887 clat (usec): min=40946, max=41298, avg=40994.82, stdev=70.98 00:09:39.887 lat (usec): min=40960, max=41306, avg=41009.04, stdev=69.66 00:09:39.887 clat percentiles (usec): 00:09:39.887 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:39.887 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:39.887 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:39.887 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:39.887 | 99.99th=[41157] 00:09:39.887 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:09:39.887 slat (nsec): min=7971, max=39596, avg=10847.39, stdev=3394.02 00:09:39.887 clat (usec): min=165, max=466, avg=270.16, stdev=54.13 00:09:39.887 lat (usec): min=174, max=477, avg=281.01, stdev=54.89 00:09:39.887 clat percentiles (usec): 00:09:39.887 | 1.00th=[ 182], 5.00th=[ 200], 10.00th=[ 219], 20.00th=[ 229], 
00:09:39.887 | 30.00th=[ 237], 40.00th=[ 245], 50.00th=[ 255], 60.00th=[ 269], 00:09:39.887 | 70.00th=[ 285], 80.00th=[ 322], 90.00th=[ 347], 95.00th=[ 371], 00:09:39.887 | 99.00th=[ 433], 99.50th=[ 449], 99.90th=[ 465], 99.95th=[ 465], 00:09:39.887 | 99.99th=[ 465] 00:09:39.887 bw ( KiB/s): min= 4087, max= 4087, per=29.28%, avg=4087.00, stdev= 0.00, samples=1 00:09:39.887 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:09:39.887 lat (usec) : 250=43.34%, 500=52.72% 00:09:39.887 lat (msec) : 50=3.94% 00:09:39.887 cpu : usr=0.40%, sys=0.80%, ctx=534, majf=0, minf=1 00:09:39.887 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.887 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.887 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.887 job1: (groupid=0, jobs=1): err= 0: pid=1304869: Thu Jul 25 23:15:37 2024 00:09:39.887 read: IOPS=21, BW=86.6KiB/s (88.7kB/s)(88.0KiB/1016msec) 00:09:39.887 slat (nsec): min=13076, max=33796, avg=20646.59, stdev=8357.35 00:09:39.887 clat (usec): min=40886, max=41332, avg=41005.64, stdev=102.42 00:09:39.887 lat (usec): min=40902, max=41348, avg=41026.28, stdev=102.14 00:09:39.887 clat percentiles (usec): 00:09:39.887 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:39.887 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:39.887 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:39.887 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:39.887 | 99.99th=[41157] 00:09:39.887 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:09:39.887 slat (nsec): min=8149, max=35483, avg=16432.24, stdev=5819.41 00:09:39.887 clat (usec): min=168, max=246, avg=199.03, stdev=11.31 00:09:39.887 lat (usec): min=183, max=264, avg=215.46, stdev=12.58 00:09:39.887 clat percentiles (usec): 00:09:39.887 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 190], 00:09:39.887 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 202], 00:09:39.887 | 70.00th=[ 204], 80.00th=[ 208], 90.00th=[ 215], 95.00th=[ 219], 00:09:39.887 | 99.00th=[ 229], 99.50th=[ 233], 99.90th=[ 247], 99.95th=[ 247], 00:09:39.887 | 99.99th=[ 247] 00:09:39.887 bw ( KiB/s): min= 4087, max= 4087, per=29.28%, avg=4087.00, stdev= 0.00, samples=1 00:09:39.887 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:09:39.887 lat (usec) : 250=95.88% 00:09:39.887 lat (msec) : 50=4.12% 00:09:39.887 cpu : usr=0.20%, sys=0.99%, ctx=535, majf=0, minf=2 00:09:39.887 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.887 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.887 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.887 job2: (groupid=0, jobs=1): err= 0: pid=1304870: Thu Jul 25 23:15:37 2024 00:09:39.887 read: IOPS=1947, BW=7788KiB/s (7975kB/s)(7796KiB/1001msec) 00:09:39.887 slat (nsec): min=5309, max=59805, avg=11730.33, stdev=5476.44 00:09:39.887 clat (usec): min=215, max=714, avg=262.30, stdev=26.70 00:09:39.887 lat (usec): min=220, max=731, avg=274.03, stdev=30.24 00:09:39.887 clat 
percentiles (usec): 00:09:39.887 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 239], 00:09:39.887 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 260], 60.00th=[ 269], 00:09:39.887 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 293], 95.00th=[ 302], 00:09:39.887 | 99.00th=[ 322], 99.50th=[ 338], 99.90th=[ 594], 99.95th=[ 717], 00:09:39.887 | 99.99th=[ 717] 00:09:39.887 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:39.887 slat (nsec): min=6869, max=70292, avg=14613.86, stdev=8153.77 00:09:39.887 clat (usec): min=149, max=399, avg=205.42, stdev=48.40 00:09:39.887 lat (usec): min=156, max=437, avg=220.04, stdev=53.35 00:09:39.887 clat percentiles (usec): 00:09:39.887 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:09:39.887 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 190], 60.00th=[ 198], 00:09:39.887 | 70.00th=[ 210], 80.00th=[ 235], 90.00th=[ 281], 95.00th=[ 306], 00:09:39.887 | 99.00th=[ 371], 99.50th=[ 383], 99.90th=[ 396], 99.95th=[ 396], 00:09:39.887 | 99.99th=[ 400] 00:09:39.887 bw ( KiB/s): min= 8175, max= 8175, per=58.56%, avg=8175.00, stdev= 0.00, samples=1 00:09:39.887 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:09:39.887 lat (usec) : 250=61.40%, 500=38.55%, 750=0.05% 00:09:39.887 cpu : usr=2.90%, sys=8.30%, ctx=3998, majf=0, minf=1 00:09:39.887 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.887 issued rwts: total=1949,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.887 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.887 job3: (groupid=0, jobs=1): err= 0: pid=1304871: Thu Jul 25 23:15:37 2024 00:09:39.887 read: IOPS=21, BW=85.7KiB/s (87.7kB/s)(88.0KiB/1027msec) 00:09:39.887 slat (nsec): min=7298, max=36509, avg=21062.32, stdev=10117.10 00:09:39.887 clat (usec): min=291, max=44989, avg=39296.56, stdev=8754.26 00:09:39.887 lat (usec): min=304, max=45008, avg=39317.62, stdev=8756.08 00:09:39.887 clat percentiles (usec): 00:09:39.887 | 1.00th=[ 293], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:39.887 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:39.887 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:39.887 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:09:39.887 | 99.99th=[44827] 00:09:39.887 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:09:39.887 slat (nsec): min=6782, max=76257, avg=19462.04, stdev=12120.77 00:09:39.888 clat (usec): min=187, max=500, avg=291.12, stdev=74.03 00:09:39.888 lat (usec): min=195, max=538, avg=310.59, stdev=81.88 00:09:39.888 clat percentiles (usec): 00:09:39.888 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 212], 20.00th=[ 225], 00:09:39.888 | 30.00th=[ 233], 40.00th=[ 251], 50.00th=[ 273], 60.00th=[ 293], 00:09:39.888 | 70.00th=[ 334], 80.00th=[ 363], 90.00th=[ 408], 95.00th=[ 433], 00:09:39.888 | 99.00th=[ 465], 99.50th=[ 482], 99.90th=[ 502], 99.95th=[ 502], 00:09:39.888 | 99.99th=[ 502] 00:09:39.888 bw ( KiB/s): min= 4087, max= 4087, per=29.28%, avg=4087.00, stdev= 0.00, samples=1 00:09:39.888 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:09:39.888 lat (usec) : 250=38.39%, 500=57.49%, 750=0.19% 00:09:39.888 lat (msec) : 50=3.93% 00:09:39.888 cpu : usr=1.27%, sys=0.68%, ctx=534, majf=0, minf=1 00:09:39.888 
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.888 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.888 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.888 00:09:39.888 Run status group 0 (all jobs): 00:09:39.888 READ: bw=7844KiB/s (8032kB/s), 83.4KiB/s-7788KiB/s (85.4kB/s-7975kB/s), io=8056KiB (8249kB), run=1001-1027msec 00:09:39.888 WRITE: bw=13.6MiB/s (14.3MB/s), 1994KiB/s-8184KiB/s (2042kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1027msec 00:09:39.888 00:09:39.888 Disk stats (read/write): 00:09:39.888 nvme0n1: ios=57/512, merge=0/0, ticks=1664/138, in_queue=1802, util=97.09% 00:09:39.888 nvme0n2: ios=42/512, merge=0/0, ticks=1731/98, in_queue=1829, util=97.97% 00:09:39.888 nvme0n3: ios=1564/1944, merge=0/0, ticks=1366/380, in_queue=1746, util=98.02% 00:09:39.888 nvme0n4: ios=17/512, merge=0/0, ticks=656/140, in_queue=796, util=89.59% 00:09:39.888 23:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:39.888 [global] 00:09:39.888 thread=1 00:09:39.888 invalidate=1 00:09:39.888 rw=write 00:09:39.888 time_based=1 00:09:39.888 runtime=1 00:09:39.888 ioengine=libaio 00:09:39.888 direct=1 00:09:39.888 bs=4096 00:09:39.888 iodepth=128 00:09:39.888 norandommap=0 00:09:39.888 numjobs=1 00:09:39.888 00:09:39.888 verify_dump=1 00:09:39.888 verify_backlog=512 00:09:39.888 verify_state_save=0 00:09:39.888 do_verify=1 00:09:39.888 verify=crc32c-intel 00:09:39.888 [job0] 00:09:39.888 filename=/dev/nvme0n1 00:09:39.888 [job1] 00:09:39.888 filename=/dev/nvme0n2 00:09:39.888 [job2] 00:09:39.888 filename=/dev/nvme0n3 00:09:39.888 [job3] 00:09:39.888 filename=/dev/nvme0n4 00:09:39.888 Could not set queue depth (nvme0n1) 00:09:39.888 Could not set queue depth (nvme0n2) 00:09:39.888 Could not set queue depth (nvme0n3) 00:09:39.888 Could not set queue depth (nvme0n4) 00:09:40.145 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:40.145 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:40.145 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:40.145 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:40.145 fio-3.35 00:09:40.145 Starting 4 threads 00:09:41.518 00:09:41.518 job0: (groupid=0, jobs=1): err= 0: pid=1305103: Thu Jul 25 23:15:38 2024 00:09:41.518 read: IOPS=3503, BW=13.7MiB/s (14.3MB/s)(13.8MiB/1005msec) 00:09:41.518 slat (usec): min=2, max=22561, avg=139.53, stdev=987.83 00:09:41.518 clat (usec): min=1791, max=49208, avg=17769.21, stdev=6097.03 00:09:41.518 lat (usec): min=2670, max=49227, avg=17908.74, stdev=6156.23 00:09:41.518 clat percentiles (usec): 00:09:41.518 | 1.00th=[ 5604], 5.00th=[12125], 10.00th=[12911], 20.00th=[13304], 00:09:41.518 | 30.00th=[13566], 40.00th=[14353], 50.00th=[15926], 60.00th=[18482], 00:09:41.518 | 70.00th=[20317], 80.00th=[21890], 90.00th=[24249], 95.00th=[30802], 00:09:41.518 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[45876], 00:09:41.518 | 99.99th=[49021] 00:09:41.518 write: IOPS=3566, BW=13.9MiB/s 
(14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:09:41.518 slat (usec): min=4, max=20438, avg=128.84, stdev=818.00 00:09:41.518 clat (usec): min=836, max=43658, avg=18115.32, stdev=7445.50 00:09:41.518 lat (usec): min=844, max=43678, avg=18244.16, stdev=7522.40 00:09:41.518 clat percentiles (usec): 00:09:41.518 | 1.00th=[ 3851], 5.00th=[ 9110], 10.00th=[11994], 20.00th=[12911], 00:09:41.518 | 30.00th=[13435], 40.00th=[14484], 50.00th=[16450], 60.00th=[19006], 00:09:41.518 | 70.00th=[21627], 80.00th=[22152], 90.00th=[25297], 95.00th=[35390], 00:09:41.518 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:09:41.518 | 99.99th=[43779] 00:09:41.518 bw ( KiB/s): min=12288, max=16384, per=22.88%, avg=14336.00, stdev=2896.31, samples=2 00:09:41.518 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:09:41.518 lat (usec) : 1000=0.03% 00:09:41.518 lat (msec) : 2=0.01%, 4=0.89%, 10=2.91%, 20=60.94%, 50=35.21% 00:09:41.518 cpu : usr=4.28%, sys=8.27%, ctx=321, majf=0, minf=1 00:09:41.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:41.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:41.518 issued rwts: total=3521,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:41.518 job1: (groupid=0, jobs=1): err= 0: pid=1305104: Thu Jul 25 23:15:38 2024 00:09:41.518 read: IOPS=2808, BW=11.0MiB/s (11.5MB/s)(11.5MiB/1046msec) 00:09:41.518 slat (usec): min=2, max=17237, avg=147.87, stdev=993.99 00:09:41.518 clat (usec): min=4749, max=68463, avg=19867.16, stdev=11650.97 00:09:41.518 lat (usec): min=4753, max=77811, avg=20015.03, stdev=11722.33 00:09:41.518 clat percentiles (usec): 00:09:41.518 | 1.00th=[ 5997], 5.00th=[ 7373], 10.00th=[ 8717], 20.00th=[11338], 00:09:41.518 | 30.00th=[13304], 40.00th=[15664], 50.00th=[18482], 60.00th=[20841], 00:09:41.518 | 70.00th=[21890], 80.00th=[23725], 90.00th=[30016], 95.00th=[44827], 00:09:41.518 | 99.00th=[67634], 99.50th=[67634], 99.90th=[68682], 99.95th=[68682], 00:09:41.518 | 99.99th=[68682] 00:09:41.518 write: IOPS=2936, BW=11.5MiB/s (12.0MB/s)(12.0MiB/1046msec); 0 zone resets 00:09:41.518 slat (usec): min=4, max=25653, avg=172.45, stdev=854.72 00:09:41.518 clat (usec): min=1613, max=65839, avg=24094.51, stdev=17143.08 00:09:41.518 lat (usec): min=1618, max=65862, avg=24266.95, stdev=17259.79 00:09:41.518 clat percentiles (usec): 00:09:41.518 | 1.00th=[ 1696], 5.00th=[ 2966], 10.00th=[ 4080], 20.00th=[ 8291], 00:09:41.518 | 30.00th=[12780], 40.00th=[17695], 50.00th=[21627], 60.00th=[22414], 00:09:41.518 | 70.00th=[27657], 80.00th=[41157], 90.00th=[52691], 95.00th=[59507], 00:09:41.518 | 99.00th=[64750], 99.50th=[65274], 99.90th=[65799], 99.95th=[65799], 00:09:41.518 | 99.99th=[65799] 00:09:41.518 bw ( KiB/s): min=10376, max=14200, per=19.61%, avg=12288.00, stdev=2703.98, samples=2 00:09:41.518 iops : min= 2594, max= 3550, avg=3072.00, stdev=675.99, samples=2 00:09:41.518 lat (msec) : 2=0.80%, 4=4.29%, 10=13.88%, 20=31.00%, 50=41.65% 00:09:41.518 lat (msec) : 100=8.39% 00:09:41.518 cpu : usr=4.78%, sys=5.17%, ctx=329, majf=0, minf=1 00:09:41.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:41.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:41.518 issued rwts: 
total=2938,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:41.518 job2: (groupid=0, jobs=1): err= 0: pid=1305105: Thu Jul 25 23:15:38 2024 00:09:41.518 read: IOPS=4684, BW=18.3MiB/s (19.2MB/s)(18.3MiB/1002msec) 00:09:41.518 slat (usec): min=2, max=27690, avg=101.14, stdev=663.07 00:09:41.518 clat (usec): min=545, max=43503, avg=13087.37, stdev=3966.11 00:09:41.518 lat (usec): min=1713, max=43518, avg=13188.51, stdev=3988.13 00:09:41.518 clat percentiles (usec): 00:09:41.518 | 1.00th=[ 5997], 5.00th=[ 9372], 10.00th=[10159], 20.00th=[11469], 00:09:41.518 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12649], 60.00th=[13042], 00:09:41.518 | 70.00th=[13435], 80.00th=[13829], 90.00th=[15008], 95.00th=[17171], 00:09:41.518 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:09:41.518 | 99.99th=[43254] 00:09:41.518 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:09:41.518 slat (usec): min=4, max=15714, avg=96.66, stdev=616.47 00:09:41.518 clat (usec): min=5956, max=45042, avg=12634.03, stdev=3880.64 00:09:41.518 lat (usec): min=5961, max=45047, avg=12730.69, stdev=3895.85 00:09:41.518 clat percentiles (usec): 00:09:41.518 | 1.00th=[ 6128], 5.00th=[ 8717], 10.00th=[10421], 20.00th=[11207], 00:09:41.518 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:09:41.518 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13960], 95.00th=[16909], 00:09:41.518 | 99.00th=[31589], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:09:41.518 | 99.99th=[44827] 00:09:41.518 bw ( KiB/s): min=20016, max=20616, per=32.43%, avg=20316.00, stdev=424.26, samples=2 00:09:41.518 iops : min= 5004, max= 5154, avg=5079.00, stdev=106.07, samples=2 00:09:41.518 lat (usec) : 750=0.01% 00:09:41.518 lat (msec) : 2=0.09%, 10=8.22%, 20=88.62%, 50=3.06% 00:09:41.518 cpu : usr=4.40%, sys=5.69%, ctx=414, majf=0, minf=1 00:09:41.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:41.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:41.519 issued rwts: total=4694,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:41.519 job3: (groupid=0, jobs=1): err= 0: pid=1305106: Thu Jul 25 23:15:38 2024 00:09:41.519 read: IOPS=4138, BW=16.2MiB/s (16.9MB/s)(16.2MiB/1005msec) 00:09:41.519 slat (usec): min=3, max=38482, avg=112.77, stdev=928.61 00:09:41.519 clat (usec): min=1049, max=64809, avg=14744.90, stdev=7433.96 00:09:41.519 lat (usec): min=6513, max=80459, avg=14857.67, stdev=7512.77 00:09:41.519 clat percentiles (usec): 00:09:41.519 | 1.00th=[ 8717], 5.00th=[10159], 10.00th=[11338], 20.00th=[12256], 00:09:41.519 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13304], 60.00th=[13435], 00:09:41.519 | 70.00th=[13829], 80.00th=[14615], 90.00th=[17171], 95.00th=[22938], 00:09:41.519 | 99.00th=[57934], 99.50th=[63177], 99.90th=[63177], 99.95th=[63177], 00:09:41.519 | 99.99th=[64750] 00:09:41.519 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:09:41.519 slat (usec): min=4, max=21685, avg=103.61, stdev=713.23 00:09:41.519 clat (usec): min=797, max=64623, avg=14359.34, stdev=6063.04 00:09:41.519 lat (usec): min=805, max=64629, avg=14462.95, stdev=6098.65 00:09:41.519 clat percentiles (usec): 00:09:41.519 | 1.00th=[ 7635], 5.00th=[11076], 10.00th=[11863], 20.00th=[12256], 
00:09:41.519 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13173], 60.00th=[13435], 00:09:41.519 | 70.00th=[13566], 80.00th=[14222], 90.00th=[15533], 95.00th=[19530], 00:09:41.519 | 99.00th=[47449], 99.50th=[47449], 99.90th=[56886], 99.95th=[56886], 00:09:41.519 | 99.99th=[64750] 00:09:41.519 bw ( KiB/s): min=16384, max=19960, per=29.00%, avg=18172.00, stdev=2528.61, samples=2 00:09:41.519 iops : min= 4096, max= 4990, avg=4543.00, stdev=632.15, samples=2 00:09:41.519 lat (usec) : 1000=0.02% 00:09:41.519 lat (msec) : 2=0.01%, 10=3.40%, 20=91.51%, 50=4.11%, 100=0.95% 00:09:41.519 cpu : usr=6.37%, sys=9.66%, ctx=354, majf=0, minf=1 00:09:41.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:41.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:41.519 issued rwts: total=4159,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:41.519 00:09:41.519 Run status group 0 (all jobs): 00:09:41.519 READ: bw=57.2MiB/s (60.0MB/s), 11.0MiB/s-18.3MiB/s (11.5MB/s-19.2MB/s), io=59.8MiB (62.7MB), run=1002-1046msec 00:09:41.519 WRITE: bw=61.2MiB/s (64.2MB/s), 11.5MiB/s-20.0MiB/s (12.0MB/s-20.9MB/s), io=64.0MiB (67.1MB), run=1002-1046msec 00:09:41.519 00:09:41.519 Disk stats (read/write): 00:09:41.519 nvme0n1: ios=3072/3072, merge=0/0, ticks=31457/29687, in_queue=61144, util=98.60% 00:09:41.519 nvme0n2: ios=2599/2560, merge=0/0, ticks=25084/37128, in_queue=62212, util=97.76% 00:09:41.519 nvme0n3: ios=4153/4103, merge=0/0, ticks=22105/20265, in_queue=42370, util=95.17% 00:09:41.519 nvme0n4: ios=3640/3663, merge=0/0, ticks=26904/25179, in_queue=52083, util=97.25% 00:09:41.519 23:15:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:41.519 [global] 00:09:41.519 thread=1 00:09:41.519 invalidate=1 00:09:41.519 rw=randwrite 00:09:41.519 time_based=1 00:09:41.519 runtime=1 00:09:41.519 ioengine=libaio 00:09:41.519 direct=1 00:09:41.519 bs=4096 00:09:41.519 iodepth=128 00:09:41.519 norandommap=0 00:09:41.519 numjobs=1 00:09:41.519 00:09:41.519 verify_dump=1 00:09:41.519 verify_backlog=512 00:09:41.519 verify_state_save=0 00:09:41.519 do_verify=1 00:09:41.519 verify=crc32c-intel 00:09:41.519 [job0] 00:09:41.519 filename=/dev/nvme0n1 00:09:41.519 [job1] 00:09:41.519 filename=/dev/nvme0n2 00:09:41.519 [job2] 00:09:41.519 filename=/dev/nvme0n3 00:09:41.519 [job3] 00:09:41.519 filename=/dev/nvme0n4 00:09:41.519 Could not set queue depth (nvme0n1) 00:09:41.519 Could not set queue depth (nvme0n2) 00:09:41.519 Could not set queue depth (nvme0n3) 00:09:41.519 Could not set queue depth (nvme0n4) 00:09:41.519 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:41.519 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:41.519 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:41.519 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:41.519 fio-3.35 00:09:41.519 Starting 4 threads 00:09:42.891 00:09:42.891 job0: (groupid=0, jobs=1): err= 0: pid=1305334: Thu Jul 25 23:15:40 2024 00:09:42.891 read: IOPS=4071, BW=15.9MiB/s 
(16.7MB/s)(16.0MiB/1006msec) 00:09:42.891 slat (usec): min=2, max=17385, avg=120.21, stdev=690.13 00:09:42.891 clat (usec): min=8660, max=50321, avg=15249.78, stdev=6901.72 00:09:42.891 lat (usec): min=8676, max=50331, avg=15370.00, stdev=6947.78 00:09:42.891 clat percentiles (usec): 00:09:42.891 | 1.00th=[ 9372], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[11338], 00:09:42.891 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12518], 60.00th=[13304], 00:09:42.891 | 70.00th=[14746], 80.00th=[16909], 90.00th=[25560], 95.00th=[30016], 00:09:42.891 | 99.00th=[40633], 99.50th=[50070], 99.90th=[50070], 99.95th=[50070], 00:09:42.891 | 99.99th=[50070] 00:09:42.891 write: IOPS=4463, BW=17.4MiB/s (18.3MB/s)(17.5MiB/1006msec); 0 zone resets 00:09:42.891 slat (usec): min=3, max=13124, avg=102.94, stdev=646.45 00:09:42.891 clat (usec): min=759, max=50219, avg=14512.30, stdev=6381.59 00:09:42.891 lat (usec): min=773, max=50230, avg=14615.24, stdev=6420.49 00:09:42.891 clat percentiles (usec): 00:09:42.891 | 1.00th=[ 5997], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10290], 00:09:42.891 | 30.00th=[11207], 40.00th=[11600], 50.00th=[12387], 60.00th=[13173], 00:09:42.891 | 70.00th=[15664], 80.00th=[16909], 90.00th=[22938], 95.00th=[29230], 00:09:42.891 | 99.00th=[39584], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:42.891 | 99.99th=[50070] 00:09:42.892 bw ( KiB/s): min=14416, max=20439, per=28.49%, avg=17427.50, stdev=4258.90, samples=2 00:09:42.892 iops : min= 3604, max= 5109, avg=4356.50, stdev=1064.20, samples=2 00:09:42.892 lat (usec) : 1000=0.03% 00:09:42.892 lat (msec) : 2=0.05%, 4=0.33%, 10=12.21%, 20=71.78%, 50=15.33% 00:09:42.892 lat (msec) : 100=0.28% 00:09:42.892 cpu : usr=6.07%, sys=8.66%, ctx=424, majf=0, minf=13 00:09:42.892 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:42.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:42.892 issued rwts: total=4096,4490,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.892 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:42.892 job1: (groupid=0, jobs=1): err= 0: pid=1305335: Thu Jul 25 23:15:40 2024 00:09:42.892 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:09:42.892 slat (usec): min=2, max=17135, avg=120.82, stdev=832.20 00:09:42.892 clat (usec): min=7888, max=47204, avg=15718.56, stdev=5748.31 00:09:42.892 lat (usec): min=7900, max=48501, avg=15839.39, stdev=5823.06 00:09:42.892 clat percentiles (usec): 00:09:42.892 | 1.00th=[ 8848], 5.00th=[ 9765], 10.00th=[10814], 20.00th=[11600], 00:09:42.892 | 30.00th=[11863], 40.00th=[12649], 50.00th=[13960], 60.00th=[15664], 00:09:42.892 | 70.00th=[16909], 80.00th=[19792], 90.00th=[22676], 95.00th=[28181], 00:09:42.892 | 99.00th=[36439], 99.50th=[40109], 99.90th=[46924], 99.95th=[47449], 00:09:42.892 | 99.99th=[47449] 00:09:42.892 write: IOPS=4182, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1006msec); 0 zone resets 00:09:42.892 slat (usec): min=3, max=10087, avg=111.72, stdev=610.60 00:09:42.892 clat (usec): min=1836, max=51864, avg=14880.93, stdev=5688.51 00:09:42.892 lat (usec): min=5847, max=51878, avg=14992.66, stdev=5730.86 00:09:42.892 clat percentiles (usec): 00:09:42.892 | 1.00th=[ 8029], 5.00th=[ 9765], 10.00th=[11338], 20.00th=[12125], 00:09:42.892 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13173], 60.00th=[14222], 00:09:42.892 | 70.00th=[15139], 80.00th=[16057], 90.00th=[20055], 95.00th=[20841], 00:09:42.892 | 99.00th=[45351], 
99.50th=[49546], 99.90th=[51119], 99.95th=[51643], 00:09:42.892 | 99.99th=[51643] 00:09:42.892 bw ( KiB/s): min=16384, max=16384, per=26.79%, avg=16384.00, stdev= 0.00, samples=2 00:09:42.892 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:42.892 lat (msec) : 2=0.01%, 10=5.35%, 20=81.08%, 50=13.32%, 100=0.24% 00:09:42.892 cpu : usr=4.88%, sys=6.67%, ctx=458, majf=0, minf=17 00:09:42.892 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:42.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:42.892 issued rwts: total=4096,4208,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.892 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:42.892 job2: (groupid=0, jobs=1): err= 0: pid=1305336: Thu Jul 25 23:15:40 2024 00:09:42.892 read: IOPS=3191, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1006msec) 00:09:42.892 slat (usec): min=2, max=13314, avg=149.36, stdev=930.89 00:09:42.892 clat (usec): min=3800, max=36478, avg=18920.59, stdev=5390.76 00:09:42.892 lat (usec): min=3817, max=37415, avg=19069.94, stdev=5462.61 00:09:42.892 clat percentiles (usec): 00:09:42.892 | 1.00th=[ 6063], 5.00th=[10159], 10.00th=[13042], 20.00th=[13960], 00:09:42.892 | 30.00th=[15401], 40.00th=[16909], 50.00th=[19530], 60.00th=[20841], 00:09:42.892 | 70.00th=[21890], 80.00th=[23725], 90.00th=[25035], 95.00th=[27395], 00:09:42.892 | 99.00th=[31065], 99.50th=[34866], 99.90th=[35914], 99.95th=[36439], 00:09:42.892 | 99.99th=[36439] 00:09:42.892 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:09:42.892 slat (usec): min=3, max=24793, avg=133.35, stdev=924.54 00:09:42.892 clat (usec): min=2469, max=70451, avg=18489.61, stdev=9520.58 00:09:42.892 lat (usec): min=2479, max=70479, avg=18622.97, stdev=9586.65 00:09:42.892 clat percentiles (usec): 00:09:42.892 | 1.00th=[ 5800], 5.00th=[10945], 10.00th=[12387], 20.00th=[13304], 00:09:42.892 | 30.00th=[13566], 40.00th=[14353], 50.00th=[15270], 60.00th=[16909], 00:09:42.892 | 70.00th=[20055], 80.00th=[20841], 90.00th=[27657], 95.00th=[41157], 00:09:42.892 | 99.00th=[60556], 99.50th=[65274], 99.90th=[70779], 99.95th=[70779], 00:09:42.892 | 99.99th=[70779] 00:09:42.892 bw ( KiB/s): min=13280, max=15392, per=23.44%, avg=14336.00, stdev=1493.41, samples=2 00:09:42.892 iops : min= 3320, max= 3848, avg=3584.00, stdev=373.35, samples=2 00:09:42.892 lat (msec) : 4=0.34%, 10=4.37%, 20=58.45%, 50=35.66%, 100=1.18% 00:09:42.892 cpu : usr=4.58%, sys=5.37%, ctx=223, majf=0, minf=7 00:09:42.892 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:42.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:42.892 issued rwts: total=3211,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.892 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:42.892 job3: (groupid=0, jobs=1): err= 0: pid=1305337: Thu Jul 25 23:15:40 2024 00:09:42.892 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:09:42.892 slat (usec): min=3, max=16250, avg=137.22, stdev=880.56 00:09:42.892 clat (usec): min=7949, max=45086, avg=18162.05, stdev=5950.28 00:09:42.892 lat (usec): min=7954, max=45430, avg=18299.26, stdev=6019.50 00:09:42.892 clat percentiles (usec): 00:09:42.892 | 1.00th=[ 8586], 5.00th=[11863], 10.00th=[12125], 20.00th=[13304], 00:09:42.892 | 30.00th=[15008], 
40.00th=[15795], 50.00th=[16319], 60.00th=[17695], 00:09:42.892 | 70.00th=[20055], 80.00th=[24249], 90.00th=[27132], 95.00th=[27919], 00:09:42.892 | 99.00th=[34341], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:09:42.892 | 99.99th=[44827] 00:09:42.892 write: IOPS=3081, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1006msec); 0 zone resets 00:09:42.892 slat (usec): min=4, max=23500, avg=173.13, stdev=1095.80 00:09:42.892 clat (usec): min=5194, max=60136, avg=22906.26, stdev=11805.03 00:09:42.892 lat (usec): min=7637, max=60156, avg=23079.39, stdev=11890.33 00:09:42.892 clat percentiles (usec): 00:09:42.892 | 1.00th=[ 8094], 5.00th=[11338], 10.00th=[12256], 20.00th=[13435], 00:09:42.892 | 30.00th=[14877], 40.00th=[15401], 50.00th=[17695], 60.00th=[23725], 00:09:42.892 | 70.00th=[27132], 80.00th=[34341], 90.00th=[39060], 95.00th=[44827], 00:09:42.892 | 99.00th=[55313], 99.50th=[60031], 99.90th=[60031], 99.95th=[60031], 00:09:42.892 | 99.99th=[60031] 00:09:42.892 bw ( KiB/s): min=12263, max=12288, per=20.07%, avg=12275.50, stdev=17.68, samples=2 00:09:42.892 iops : min= 3065, max= 3072, avg=3068.50, stdev= 4.95, samples=2 00:09:42.892 lat (msec) : 10=3.95%, 20=58.38%, 50=35.87%, 100=1.80% 00:09:42.892 cpu : usr=5.57%, sys=7.66%, ctx=276, majf=0, minf=15 00:09:42.892 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:42.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:42.892 issued rwts: total=3072,3100,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.892 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:42.892 00:09:42.892 Run status group 0 (all jobs): 00:09:42.892 READ: bw=56.2MiB/s (58.9MB/s), 11.9MiB/s-15.9MiB/s (12.5MB/s-16.7MB/s), io=56.5MiB (59.3MB), run=1006-1006msec 00:09:42.892 WRITE: bw=59.7MiB/s (62.6MB/s), 12.0MiB/s-17.4MiB/s (12.6MB/s-18.3MB/s), io=60.1MiB (63.0MB), run=1006-1006msec 00:09:42.892 00:09:42.892 Disk stats (read/write): 00:09:42.892 nvme0n1: ios=3625/3909, merge=0/0, ticks=19020/20751, in_queue=39771, util=86.27% 00:09:42.892 nvme0n2: ios=3273/3584, merge=0/0, ticks=24141/24091, in_queue=48232, util=97.97% 00:09:42.892 nvme0n3: ios=2606/3072, merge=0/0, ticks=24227/25562, in_queue=49789, util=97.70% 00:09:42.892 nvme0n4: ios=2378/2560, merge=0/0, ticks=21419/30898, in_queue=52317, util=90.42% 00:09:42.892 23:15:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:42.892 23:15:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1305493 00:09:42.892 23:15:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:42.892 23:15:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:42.892 [global] 00:09:42.892 thread=1 00:09:42.892 invalidate=1 00:09:42.892 rw=read 00:09:42.892 time_based=1 00:09:42.892 runtime=10 00:09:42.892 ioengine=libaio 00:09:42.892 direct=1 00:09:42.892 bs=4096 00:09:42.892 iodepth=1 00:09:42.892 norandommap=1 00:09:42.892 numjobs=1 00:09:42.892 00:09:42.892 [job0] 00:09:42.892 filename=/dev/nvme0n1 00:09:42.892 [job1] 00:09:42.892 filename=/dev/nvme0n2 00:09:42.892 [job2] 00:09:42.892 filename=/dev/nvme0n3 00:09:42.892 [job3] 00:09:42.892 filename=/dev/nvme0n4 00:09:42.892 Could not set queue depth (nvme0n1) 00:09:42.892 Could not set queue depth (nvme0n2) 00:09:42.892 Could not set queue depth 
(nvme0n3) 00:09:42.892 Could not set queue depth (nvme0n4) 00:09:42.892 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.893 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.893 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.893 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.893 fio-3.35 00:09:42.893 Starting 4 threads 00:09:46.198 23:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:46.198 23:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:46.198 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=40927232, buflen=4096 00:09:46.198 fio: pid=1305695, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:46.455 23:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:46.455 23:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:46.455 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=5087232, buflen=4096 00:09:46.455 fio: pid=1305694, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:46.713 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=43790336, buflen=4096 00:09:46.713 fio: pid=1305683, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:46.713 23:15:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:46.713 23:15:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:46.971 23:15:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:46.971 23:15:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:46.971 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=19238912, buflen=4096 00:09:46.971 fio: pid=1305684, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:09:46.971 00:09:46.971 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1305683: Thu Jul 25 23:15:44 2024 00:09:46.971 read: IOPS=3066, BW=12.0MiB/s (12.6MB/s)(41.8MiB/3487msec) 00:09:46.971 slat (usec): min=5, max=15598, avg=15.12, stdev=233.29 00:09:46.971 clat (usec): min=220, max=42112, avg=305.91, stdev=754.01 00:09:46.971 lat (usec): min=226, max=54046, avg=321.03, stdev=846.60 00:09:46.971 clat percentiles (usec): 00:09:46.971 | 1.00th=[ 235], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 260], 00:09:46.971 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 289], 00:09:46.971 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 326], 95.00th=[ 367], 00:09:46.971 | 99.00th=[ 515], 99.50th=[ 545], 99.90th=[ 644], 99.95th=[ 
1237], 00:09:46.971 | 99.99th=[41157] 00:09:46.971 bw ( KiB/s): min=12008, max=13920, per=46.43%, avg=13088.00, stdev=793.38, samples=6 00:09:46.971 iops : min= 3002, max= 3480, avg=3272.00, stdev=198.35, samples=6 00:09:46.971 lat (usec) : 250=9.18%, 500=88.98%, 750=1.76% 00:09:46.971 lat (msec) : 2=0.02%, 4=0.01%, 50=0.04% 00:09:46.971 cpu : usr=2.52%, sys=5.39%, ctx=10695, majf=0, minf=1 00:09:46.971 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.971 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.971 issued rwts: total=10692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.971 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.971 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1305684: Thu Jul 25 23:15:44 2024 00:09:46.971 read: IOPS=1243, BW=4973KiB/s (5092kB/s)(18.3MiB/3778msec) 00:09:46.971 slat (usec): min=5, max=29125, avg=23.47, stdev=496.33 00:09:46.971 clat (usec): min=215, max=45439, avg=778.44, stdev=4376.82 00:09:46.971 lat (usec): min=221, max=51912, avg=800.51, stdev=4423.82 00:09:46.971 clat percentiles (usec): 00:09:46.971 | 1.00th=[ 231], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 247], 00:09:46.971 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 302], 00:09:46.971 | 70.00th=[ 334], 80.00th=[ 363], 90.00th=[ 441], 95.00th=[ 498], 00:09:46.971 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:09:46.971 | 99.99th=[45351] 00:09:46.971 bw ( KiB/s): min= 96, max=14216, per=18.10%, avg=5103.86, stdev=6039.22, samples=7 00:09:46.971 iops : min= 24, max= 3554, avg=1275.86, stdev=1509.72, samples=7 00:09:46.971 lat (usec) : 250=23.37%, 500=71.80%, 750=3.62% 00:09:46.971 lat (msec) : 2=0.04%, 50=1.15% 00:09:46.971 cpu : usr=1.01%, sys=1.64%, ctx=4705, majf=0, minf=1 00:09:46.971 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.971 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.971 issued rwts: total=4698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.971 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.971 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1305694: Thu Jul 25 23:15:44 2024 00:09:46.971 read: IOPS=386, BW=1545KiB/s (1582kB/s)(4968KiB/3215msec) 00:09:46.971 slat (nsec): min=5798, max=65280, avg=15402.19, stdev=7778.71 00:09:46.971 clat (usec): min=258, max=42020, avg=2551.11, stdev=9137.27 00:09:46.971 lat (usec): min=265, max=42055, avg=2566.50, stdev=9138.75 00:09:46.971 clat percentiles (usec): 00:09:46.971 | 1.00th=[ 281], 5.00th=[ 310], 10.00th=[ 326], 20.00th=[ 347], 00:09:46.971 | 30.00th=[ 359], 40.00th=[ 375], 50.00th=[ 392], 60.00th=[ 404], 00:09:46.971 | 70.00th=[ 437], 80.00th=[ 486], 90.00th=[ 553], 95.00th=[40633], 00:09:46.971 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:46.971 | 99.99th=[42206] 00:09:46.971 bw ( KiB/s): min= 88, max= 3792, per=5.85%, avg=1649.33, stdev=1796.73, samples=6 00:09:46.971 iops : min= 22, max= 948, avg=412.33, stdev=449.18, samples=6 00:09:46.971 lat (usec) : 500=82.22%, 750=12.39% 00:09:46.971 lat (msec) : 4=0.08%, 50=5.23% 00:09:46.971 cpu : usr=0.25%, sys=0.65%, ctx=1244, majf=0, minf=1 00:09:46.971 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.971 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.971 issued rwts: total=1243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.971 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.971 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1305695: Thu Jul 25 23:15:44 2024 00:09:46.971 read: IOPS=3416, BW=13.3MiB/s (14.0MB/s)(39.0MiB/2925msec) 00:09:46.971 slat (nsec): min=4366, max=66061, avg=12149.67, stdev=6441.45 00:09:46.971 clat (usec): min=210, max=2120, avg=275.34, stdev=47.84 00:09:46.971 lat (usec): min=215, max=2130, avg=287.49, stdev=50.07 00:09:46.971 clat percentiles (usec): 00:09:46.971 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 241], 00:09:46.971 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 260], 60.00th=[ 277], 00:09:46.971 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 330], 95.00th=[ 363], 00:09:46.971 | 99.00th=[ 408], 99.50th=[ 441], 99.90th=[ 529], 99.95th=[ 586], 00:09:46.971 | 99.99th=[ 2114] 00:09:46.971 bw ( KiB/s): min=12176, max=15312, per=48.69%, avg=13724.80, stdev=1432.61, samples=5 00:09:46.971 iops : min= 3044, max= 3828, avg=3431.20, stdev=358.15, samples=5 00:09:46.971 lat (usec) : 250=39.27%, 500=60.56%, 750=0.14%, 1000=0.01% 00:09:46.971 lat (msec) : 4=0.01% 00:09:46.971 cpu : usr=2.29%, sys=5.23%, ctx=9994, majf=0, minf=1 00:09:46.971 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.971 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.971 issued rwts: total=9993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.971 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.971 00:09:46.971 Run status group 0 (all jobs): 00:09:46.971 READ: bw=27.5MiB/s (28.9MB/s), 1545KiB/s-13.3MiB/s (1582kB/s-14.0MB/s), io=104MiB (109MB), run=2925-3778msec 00:09:46.971 00:09:46.971 Disk stats (read/write): 00:09:46.971 nvme0n1: ios=10294/0, merge=0/0, ticks=3053/0, in_queue=3053, util=94.74% 00:09:46.971 nvme0n2: ios=4736/0, merge=0/0, ticks=4509/0, in_queue=4509, util=98.55% 00:09:46.971 nvme0n3: ios=1286/0, merge=0/0, ticks=4014/0, in_queue=4014, util=99.84% 00:09:46.971 nvme0n4: ios=9847/0, merge=0/0, ticks=3874/0, in_queue=3874, util=99.80% 00:09:47.229 23:15:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:47.229 23:15:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:47.487 23:15:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:47.487 23:15:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:47.744 23:15:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:47.744 23:15:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:48.001 
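Note on the err=121 lines above: the read jobs were launched with a 10 second runtime (target/fio.sh@58), and the bdev_raid_delete/bdev_malloc_delete RPCs were issued while those jobs still held the namespaces open, so fio reporting Remote I/O error (EREMOTEIO) is the expected hotplug outcome rather than a failure of the test itself. A minimal standalone sketch of the same pattern follows; the rpc.py path, device node, and bdev name are illustrative assumptions, not values from this run:

  # start a long read job against an exported namespace, in the background
  fio --name=hotplug --filename=/dev/nvme0n1 --rw=read --bs=4k --iodepth=1 \
      --ioengine=libaio --direct=1 --time_based --runtime=10 &
  fio_pid=$!
  sleep 3                                        # let I/O start flowing (mirrors target/fio.sh@61)
  ./scripts/rpc.py bdev_malloc_delete Malloc0    # remove a backing bdev mid-I/O
  if wait "$fio_pid"; then
    echo "unexpected: fio completed cleanly"
  else
    echo "nvmf hotplug test: fio failed as expected"
  fi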
23:15:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:48.001 23:15:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:48.258 23:15:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:48.258 23:15:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1305493 00:09:48.258 23:15:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:48.258 23:15:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:48.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.258 23:15:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:48.258 23:15:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:48.258 23:15:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:48.258 23:15:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:48.258 23:15:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:48.258 23:15:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:48.258 23:15:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:48.517 23:15:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:48.517 23:15:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:48.517 nvmf hotplug test: fio failed as expected 00:09:48.517 23:15:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:48.517 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:48.517 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:48.776 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:48.776 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:48.776 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:48.776 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:48.776 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:09:48.776 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:48.776 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:09:48.776 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:48.776 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:48.776 rmmod nvme_tcp 00:09:48.776 rmmod nvme_fabrics 00:09:48.776 rmmod nvme_keyring 00:09:48.776 
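The nvmftestfini sequence running here unwinds the earlier setup: the nvme host modules are removed, the nvmf_tgt process (pid 1303556) is killed and reaped, and the network-namespace plumbing is flushed. A condensed sketch of the same steps, reusing the interface and namespace names from earlier in this log; the 'ip netns delete' line is an assumption about what _remove_spdk_ns does, since its output is redirected away:

  sync
  modprobe -v -r nvme-tcp nvme-fabrics   # unloads nvme_tcp, nvme_fabrics, nvme_keyring (see rmmod lines above)
  kill "$nvmfpid" && wait "$nvmfpid"     # stop the reactor_0 target process
  ip netns delete cvl_0_0_ns_spdk        # assumed teardown of the namespace added during init
  ip -4 addr flush cvl_0_1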
23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:48.776 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:09:48.776 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:09:48.776 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1303556 ']' 00:09:48.776 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1303556 00:09:48.776 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1303556 ']' 00:09:48.776 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1303556 00:09:48.776 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:09:48.776 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:48.776 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1303556 00:09:48.776 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:48.776 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:48.776 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1303556' 00:09:48.776 killing process with pid 1303556 00:09:48.776 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1303556 00:09:48.776 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1303556 00:09:49.035 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:49.035 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:49.035 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:49.035 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:49.035 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:49.035 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.035 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.035 23:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.936 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:50.936 00:09:50.936 real 0m23.434s 00:09:50.936 user 1m21.056s 00:09:50.936 sys 0m7.532s 00:09:50.936 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:50.936 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.936 ************************************ 00:09:50.936 END TEST nvmf_fio_target 00:09:50.936 ************************************ 00:09:50.936 23:15:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:50.936 23:15:48 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:50.936 23:15:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:50.936 23:15:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:50.936 ************************************ 00:09:50.936 START TEST nvmf_bdevio 00:09:50.936 ************************************ 00:09:50.936 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:51.195 * Looking for test storage... 00:09:51.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.195 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.196 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:51.196 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:51.196 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:09:51.196 23:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:53.100 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:53.100 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:09:53.100 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:53.100 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:53.100 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:53.100 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:53.100 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:53.100 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:09:53.100 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:53.100 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:09:53.100 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:09:53.100 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:09:53.100 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:09:53.100 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:09:53.100 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:09:53.100 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:53.100 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:53.100 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:53.100 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:53.100 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:53.100 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:53.100 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:53.101 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:53.101 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:53.101 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:53.101 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:53.101 23:15:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:53.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:53.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:09:53.101 00:09:53.101 --- 10.0.0.2 ping statistics --- 00:09:53.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.101 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:53.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:53.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:09:53.101 00:09:53.101 --- 10.0.0.1 ping statistics --- 00:09:53.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.101 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:53.101 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:53.360 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:53.360 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:53.360 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:53.360 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:53.360 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1308319 00:09:53.360 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:53.360 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1308319 00:09:53.360 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1308319 ']' 00:09:53.360 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.360 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:53.360 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.360 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:53.360 23:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:53.360 [2024-07-25 23:15:50.892328] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:09:53.360 [2024-07-25 23:15:50.892433] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.360 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.360 [2024-07-25 23:15:50.929977] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:53.360 [2024-07-25 23:15:50.956943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:53.360 [2024-07-25 23:15:51.045847] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.360 [2024-07-25 23:15:51.045904] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.360 [2024-07-25 23:15:51.045926] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.360 [2024-07-25 23:15:51.045943] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.360 [2024-07-25 23:15:51.045958] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.360 [2024-07-25 23:15:51.046104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:53.360 [2024-07-25 23:15:51.046152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:53.360 [2024-07-25 23:15:51.046227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:53.360 [2024-07-25 23:15:51.046232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:53.620 [2024-07-25 23:15:51.207536] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:53.620 Malloc0 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.620 23:15:51 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:53.620 [2024-07-25 23:15:51.261322] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:53.620 { 00:09:53.620 "params": { 00:09:53.620 "name": "Nvme$subsystem", 00:09:53.620 "trtype": "$TEST_TRANSPORT", 00:09:53.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.620 "adrfam": "ipv4", 00:09:53.620 "trsvcid": "$NVMF_PORT", 00:09:53.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.620 "hdgst": ${hdgst:-false}, 00:09:53.620 "ddgst": ${ddgst:-false} 00:09:53.620 }, 00:09:53.620 "method": "bdev_nvme_attach_controller" 00:09:53.620 } 00:09:53.620 EOF 00:09:53.620 )") 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
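For readers following the gen_nvmf_target_json trace above: the heredoc template is expanded once per subsystem (here only Nvme1/cnode1), and the expanded bdev_nvme_attach_controller entry is printed verbatim just below before being handed to bdevio on /dev/fd/62. Written to a file, the same configuration could drive bdevio directly; in this sketch the outer "subsystems"/"bdev" wrapper is an assumption about the helper's final output shape, while the inner entry is copied from this run:

  cat > /tmp/bdevio_nvme1.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [ {
      "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1",
                  "hdgst": false, "ddgst": false },
      "method": "bdev_nvme_attach_controller" } ] } ] }
  EOF
  ./test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme1.json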
00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:09:53.620 23:15:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:53.620 "params": { 00:09:53.620 "name": "Nvme1", 00:09:53.620 "trtype": "tcp", 00:09:53.620 "traddr": "10.0.0.2", 00:09:53.621 "adrfam": "ipv4", 00:09:53.621 "trsvcid": "4420", 00:09:53.621 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.621 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.621 "hdgst": false, 00:09:53.621 "ddgst": false 00:09:53.621 }, 00:09:53.621 "method": "bdev_nvme_attach_controller" 00:09:53.621 }' 00:09:53.621 [2024-07-25 23:15:51.309495] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:09:53.621 [2024-07-25 23:15:51.309563] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1308348 ] 00:09:53.621 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.621 [2024-07-25 23:15:51.341089] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:53.880 [2024-07-25 23:15:51.370701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:53.880 [2024-07-25 23:15:51.463421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.880 [2024-07-25 23:15:51.463447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.880 [2024-07-25 23:15:51.463450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.140 I/O targets: 00:09:54.140 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:54.140 00:09:54.140 00:09:54.140 CUnit - A unit testing framework for C - Version 2.1-3 00:09:54.140 http://cunit.sourceforge.net/ 00:09:54.140 00:09:54.140 00:09:54.140 Suite: bdevio tests on: Nvme1n1 00:09:54.399 Test: blockdev write read block ...passed 00:09:54.399 Test: blockdev write zeroes read block ...passed 00:09:54.399 Test: blockdev write zeroes read no split ...passed 00:09:54.399 Test: blockdev write zeroes read split ...passed 00:09:54.399 Test: blockdev write zeroes read split partial ...passed 00:09:54.399 Test: blockdev reset ...[2024-07-25 23:15:52.005337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:54.399 [2024-07-25 23:15:52.005451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x950940 (9): Bad file descriptor 00:09:54.399 [2024-07-25 23:15:52.103820] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:54.399 passed 00:09:54.399 Test: blockdev write read 8 blocks ...passed 00:09:54.399 Test: blockdev write read size > 128k ...passed 00:09:54.399 Test: blockdev write read invalid size ...passed 00:09:54.659 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:54.659 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:54.659 Test: blockdev write read max offset ...passed 00:09:54.659 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:54.659 Test: blockdev writev readv 8 blocks ...passed 00:09:54.659 Test: blockdev writev readv 30 x 1block ...passed 00:09:54.659 Test: blockdev writev readv block ...passed 00:09:54.659 Test: blockdev writev readv size > 128k ...passed 00:09:54.659 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:54.659 Test: blockdev comparev and writev ...[2024-07-25 23:15:52.276027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:54.659 [2024-07-25 23:15:52.276071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:54.659 [2024-07-25 23:15:52.276105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:54.659 [2024-07-25 23:15:52.276123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:54.659 [2024-07-25 23:15:52.276478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:54.659 [2024-07-25 23:15:52.276503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:54.659 [2024-07-25 23:15:52.276525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:54.659 [2024-07-25 23:15:52.276541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:54.659 [2024-07-25 23:15:52.276894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:54.659 [2024-07-25 23:15:52.276918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:54.659 [2024-07-25 23:15:52.276939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:54.659 [2024-07-25 23:15:52.276955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:54.659 [2024-07-25 23:15:52.277325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:54.659 [2024-07-25 23:15:52.277349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:54.659 [2024-07-25 23:15:52.277370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:54.659 [2024-07-25 23:15:52.277393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:54.659 passed 00:09:54.659 Test: blockdev nvme passthru rw ...passed 00:09:54.659 Test: blockdev nvme passthru vendor specific ...[2024-07-25 23:15:52.359363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:54.660 [2024-07-25 23:15:52.359390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:54.660 [2024-07-25 23:15:52.359557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:54.660 [2024-07-25 23:15:52.359578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:54.660 [2024-07-25 23:15:52.359747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:54.660 [2024-07-25 23:15:52.359769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:54.660 [2024-07-25 23:15:52.359931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:54.660 [2024-07-25 23:15:52.359952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:54.660 passed 00:09:54.660 Test: blockdev nvme admin passthru ...passed 00:09:54.919 Test: blockdev copy ...passed 00:09:54.919 00:09:54.919 Run Summary: Type Total Ran Passed Failed Inactive 00:09:54.919 suites 1 1 n/a 0 0 00:09:54.919 tests 23 23 23 0 0 00:09:54.919 asserts 152 152 152 0 n/a 00:09:54.919 00:09:54.919 Elapsed time = 1.147 seconds 00:09:54.919 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:54.919 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.919 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.919 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.919 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:54.919 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:54.919 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:54.919 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:09:54.919 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:54.919 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:09:54.919 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:54.919 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:54.919 rmmod nvme_tcp 00:09:55.179 rmmod nvme_fabrics 00:09:55.179 rmmod nvme_keyring 00:09:55.179 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:55.179 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:09:55.179 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
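With the host-side nvme modules unloaded, what remains of nvmftestfini below is reaping the target process and undoing the namespace plumbing. Roughly, as a sketch, assuming $pid is the nvmf target launched for this test (1308319 in this run) and that remove_spdk_ns deletes the namespace it created earlier:

  kill "$pid" && wait "$pid"          # stop the target and reap it (what killprocess does below)
  ip netns delete cvl_0_0_ns_spdk     # assumed effect of remove_spdk_ns
  ip -4 addr flush cvl_0_1            # clear the initiator-side interface, as logged below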
00:09:55.179 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1308319 ']' 00:09:55.179 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1308319 00:09:55.179 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1308319 ']' 00:09:55.179 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1308319 00:09:55.179 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:09:55.179 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:55.179 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1308319 00:09:55.179 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:09:55.179 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:09:55.179 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1308319' 00:09:55.179 killing process with pid 1308319 00:09:55.179 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1308319 00:09:55.179 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1308319 00:09:55.439 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:55.439 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:55.439 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:55.439 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:55.439 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:55.439 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.439 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.439 23:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.344 23:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:57.344 00:09:57.344 real 0m6.368s 00:09:57.344 user 0m10.972s 00:09:57.344 sys 0m2.076s 00:09:57.344 23:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:57.344 23:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.344 ************************************ 00:09:57.344 END TEST nvmf_bdevio 00:09:57.344 ************************************ 00:09:57.344 23:15:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:57.344 00:09:57.344 real 3m49.815s 00:09:57.344 user 9m50.584s 00:09:57.344 sys 1m9.752s 00:09:57.344 23:15:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:57.344 23:15:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:57.344 ************************************ 00:09:57.345 END TEST nvmf_target_core 00:09:57.345 ************************************ 00:09:57.604 23:15:55 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:57.604 23:15:55 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:57.604 23:15:55 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.604 23:15:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:57.604 ************************************ 00:09:57.604 START TEST nvmf_target_extra 00:09:57.604 ************************************ 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:57.604 * Looking for test storage... 00:09:57.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
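run_test nvmf_example, starting here, repeats the pattern of the bdevio run: it boots build/examples/nvmf inside the cvl_0_0_ns_spdk namespace set up below, builds the same Malloc0-backed subsystem over RPC, and then drives it from the host side with spdk_nvme_perf. The perf invocation that appears further down, as an annotated sketch (path shortened from the workspace-absolute one; the flag notes are glosses, not quoted from the tool's help):

  #   -q 64      queue depth per connection
  #   -o 4096    4 KiB I/O size
  #   -w randrw  random mixed read/write workload
  #   -M 30      rwmixread: 30% reads, 70% writes
  #   -t 10      run for 10 seconds
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

That ten-second run at queue depth 64 is what produces the summary table later in the log (15405.17 IOPS, 60.18 MiB/s, 4155.30 us average latency).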
00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:57.604 ************************************ 00:09:57.604 START TEST nvmf_example 00:09:57.604 ************************************ 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:57.604 * Looking for test storage... 00:09:57.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.604 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.605 23:15:55 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:09:57.605 23:15:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:00.136 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:00.136 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:00.136 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.136 23:15:57 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:00.136 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:00.136 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:00.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:00.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:10:00.137 00:10:00.137 --- 10.0.0.2 ping statistics --- 00:10:00.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.137 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:00.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:00.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:10:00.137 00:10:00.137 --- 10.0.0.1 ping statistics --- 00:10:00.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.137 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1310587 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1310587 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1310587 ']' 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:00.137 23:15:57 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:00.137 23:15:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:00.137 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:01.071 23:15:58 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:01.071 23:15:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:01.071 EAL: No free 2048 kB hugepages reported on node 1 00:10:11.170 Initializing NVMe Controllers 00:10:11.170 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:11.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:11.170 Initialization complete. Launching workers. 00:10:11.170 ======================================================== 00:10:11.170 Latency(us) 00:10:11.170 Device Information : IOPS MiB/s Average min max 00:10:11.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15405.17 60.18 4155.30 895.02 15604.31 00:10:11.170 ======================================================== 00:10:11.170 Total : 15405.17 60.18 4155.30 895.02 15604.31 00:10:11.170 00:10:11.170 23:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:11.170 23:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:11.170 23:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:11.170 23:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:10:11.170 23:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:11.170 23:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:10:11.170 23:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:11.170 23:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:11.170 rmmod nvme_tcp 00:10:11.170 rmmod nvme_fabrics 00:10:11.170 rmmod nvme_keyring 00:10:11.170 23:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:11.170 23:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:10:11.170 23:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:10:11.170 23:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1310587 ']' 00:10:11.170 23:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1310587 00:10:11.170 23:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1310587 ']' 00:10:11.170 23:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1310587 00:10:11.170 23:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:11.170 23:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:11.170 23:16:08 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1310587 00:10:11.170 23:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:10:11.170 23:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:11.170 23:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1310587' 00:10:11.170 killing process with pid 1310587 00:10:11.170 23:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1310587 00:10:11.170 23:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1310587 00:10:11.430 nvmf threads initialize successfully 00:10:11.430 bdev subsystem init successfully 00:10:11.430 created a nvmf target service 00:10:11.430 create targets's poll groups done 00:10:11.430 all subsystems of target started 00:10:11.430 nvmf target is running 00:10:11.430 all subsystems of target stopped 00:10:11.430 destroy targets's poll groups done 00:10:11.430 destroyed the nvmf target service 00:10:11.430 bdev subsystem finish successfully 00:10:11.430 nvmf threads destroy successfully 00:10:11.430 23:16:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:11.430 23:16:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:11.430 23:16:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:11.430 23:16:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:11.430 23:16:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:11.430 23:16:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.430 23:16:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.430 23:16:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:13.968 00:10:13.968 real 0m15.929s 00:10:13.968 user 0m45.121s 00:10:13.968 sys 0m3.348s 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:13.968 ************************************ 00:10:13.968 END TEST nvmf_example 00:10:13.968 ************************************ 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.968 23:16:11 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:13.968 ************************************ 00:10:13.968 START TEST nvmf_filesystem 00:10:13.968 ************************************ 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:13.968 * Looking for test storage... 00:10:13.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:13.968 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:13.969 23:16:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:10:13.969 
23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:13.969 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
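
applications.sh locates everything relative to its own file: dirname on BASH_SOURCE, readlink -f to canonicalize, then a second step that turns .../spdk/test/common into the .../spdk root before deriving the bin and examples directories. The log does not show how that second step strips the suffix, so the sketch below uses ../.., which is equivalent for this layout:

    # Sketch: resolve the repo root from the sourced script's location.
    _self_dir=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")")  # .../spdk/test/common
    _root=$(readlink -f "$_self_dir/../..")                    # .../spdk
    _app_dir=$_root/build/bin
    _test_app_dir=$_root/test/app
    _examples_dir=$_root/build/examples
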
common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:13.970 #define SPDK_CONFIG_H 00:10:13.970 #define SPDK_CONFIG_APPS 1 00:10:13.970 #define SPDK_CONFIG_ARCH native 00:10:13.970 #undef SPDK_CONFIG_ASAN 00:10:13.970 #undef SPDK_CONFIG_AVAHI 00:10:13.970 #undef SPDK_CONFIG_CET 00:10:13.970 #define SPDK_CONFIG_COVERAGE 1 00:10:13.970 #define SPDK_CONFIG_CROSS_PREFIX 00:10:13.970 #undef SPDK_CONFIG_CRYPTO 00:10:13.970 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:13.970 #undef SPDK_CONFIG_CUSTOMOCF 00:10:13.970 #undef SPDK_CONFIG_DAOS 00:10:13.970 #define SPDK_CONFIG_DAOS_DIR 00:10:13.970 #define SPDK_CONFIG_DEBUG 1 00:10:13.970 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:13.970 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:13.970 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:13.970 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:13.970 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:13.970 #undef SPDK_CONFIG_DPDK_UADK 00:10:13.970 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:13.970 #define SPDK_CONFIG_EXAMPLES 1 00:10:13.970 #undef SPDK_CONFIG_FC 00:10:13.970 #define SPDK_CONFIG_FC_PATH 00:10:13.970 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:13.970 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:13.970 #undef SPDK_CONFIG_FUSE 00:10:13.970 #undef SPDK_CONFIG_FUZZER 00:10:13.970 #define SPDK_CONFIG_FUZZER_LIB 00:10:13.970 #undef SPDK_CONFIG_GOLANG 00:10:13.970 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:13.970 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:13.970 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:13.970 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:13.970 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:13.970 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:13.970 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:13.970 #define SPDK_CONFIG_IDXD 1 00:10:13.970 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:13.970 #undef SPDK_CONFIG_IPSEC_MB 00:10:13.970 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:13.970 #define SPDK_CONFIG_ISAL 1 00:10:13.970 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:13.970 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:13.970 #define SPDK_CONFIG_LIBDIR 00:10:13.970 #undef SPDK_CONFIG_LTO 00:10:13.970 #define SPDK_CONFIG_MAX_LCORES 128 00:10:13.970 #define SPDK_CONFIG_NVME_CUSE 1 00:10:13.970 #undef SPDK_CONFIG_OCF 00:10:13.970 #define SPDK_CONFIG_OCF_PATH 00:10:13.970 #define SPDK_CONFIG_OPENSSL_PATH 00:10:13.970 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:13.970 #define SPDK_CONFIG_PGO_DIR 00:10:13.970 #undef SPDK_CONFIG_PGO_USE 00:10:13.970 #define SPDK_CONFIG_PREFIX /usr/local 00:10:13.970 #undef SPDK_CONFIG_RAID5F 00:10:13.970 #undef SPDK_CONFIG_RBD 00:10:13.970 #define SPDK_CONFIG_RDMA 1 00:10:13.970 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:13.970 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:13.970 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:13.970 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:13.970 #define SPDK_CONFIG_SHARED 1 00:10:13.970 #undef SPDK_CONFIG_SMA 00:10:13.970 #define SPDK_CONFIG_TESTS 1 00:10:13.970 #undef SPDK_CONFIG_TSAN 00:10:13.970 #define SPDK_CONFIG_UBLK 1 00:10:13.970 #define SPDK_CONFIG_UBSAN 1 00:10:13.970 #undef SPDK_CONFIG_UNIT_TESTS 00:10:13.970 #undef SPDK_CONFIG_URING 00:10:13.970 #define 
SPDK_CONFIG_URING_PATH 00:10:13.970 #undef SPDK_CONFIG_URING_ZNS 00:10:13.970 #undef SPDK_CONFIG_USDT 00:10:13.970 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:13.970 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:13.970 #define SPDK_CONFIG_VFIO_USER 1 00:10:13.970 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:13.970 #define SPDK_CONFIG_VHOST 1 00:10:13.970 #define SPDK_CONFIG_VIRTIO 1 00:10:13.970 #undef SPDK_CONFIG_VTUNE 00:10:13.970 #define SPDK_CONFIG_VTUNE_DIR 00:10:13.970 #define SPDK_CONFIG_WERROR 1 00:10:13.970 #define SPDK_CONFIG_WPDK_DIR 00:10:13.970 #undef SPDK_CONFIG_XNVME 00:10:13.970 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:13.970 23:16:11 
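
The long escaped pattern above (xtrace escapes every character of the glob) is a substring test against the generated header: debug-only behavior is gated by checking include/spdk/config.h for the SPDK_CONFIG_DEBUG define rather than by compiling anything. Reduced to its minimal form:

    config_h=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h

    # $(< file) reads the file without forking cat; with an unquoted
    # right-hand side, == inside [[ ]] is a glob, i.e. a substring match.
    if [[ -e $config_h && $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build detected"
    fi
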
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:13.970 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
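
The PATH echoed above carries the /opt/go, /opt/protoc and /opt/golangci directories seven times over because paths/export.sh prepends them on every source, and it is sourced once per script that pulls in scripts/common.sh. Duplicate PATH entries are harmless to lookup, just noisy; an idempotent prepend (not part of the suite, purely illustrative) would keep it flat:

    # Sketch: prepend a directory to PATH only if it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;          # already present, leave PATH unchanged
            *) PATH=$1:$PATH ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/golangci/1.54.2/bin
    export PATH
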
pm/common@81 -- # [[ Linux == Linux ]] 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:13.971 23:16:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:10:13.971 23:16:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : main 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:10:13.971 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 
00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:13.972 
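
The long run of ': 0' and ': 1' traces, each followed by an export, matches the standard defaulting idiom: ':' is a no-op, and the "${VAR:=default}" expansion inside its argument assigns only when VAR is unset, so the CI job can pre-seed any flag and the script fills in the rest. Assuming that is the underlying script text, this nvmf-tcp run reduces to lines like:

    : "${RUN_NIGHTLY:=1}";                export RUN_NIGHTLY
    : "${SPDK_TEST_NVMF:=1}";             export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT
    : "${SPDK_TEST_NVMF_NICS:=e810}";     export SPDK_TEST_NVMF_NICS
    : "${SPDK_TEST_VHOST:=0}";            export SPDK_TEST_VHOST
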
23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:13.972 23:16:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:13.972 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # 
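
The suppression-file traces above regenerate LeakSanitizer input on every run: remove the old file, write the known third-party leak patterns, and point LSAN_OPTIONS at the result so a sanitized run never fails on libfuse3's known leak. The file contents beyond the libfuse3 entry are not visible in this log; the visible steps amount to:

    suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$suppression_file"

    # One suppression per line; "leak:<pattern>" matches stack frames by substring.
    echo "leak:libfuse3.so" >> "$suppression_file"

    export LSAN_OPTIONS=suppressions=$suppression_file
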
AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j48 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 1312288 ]] 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 1312288 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
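
kill -0 sends no signal at all; it only performs the existence and permission check, which makes it the standard "is this pid alive" probe. The trace uses it on pid 1312288 before committing to test storage setup:

    pid=1312288   # pid taken from the trace above

    if [[ -n $pid ]] && kill -0 "$pid" 2> /dev/null; then
        echo "pid $pid is alive; continue with test storage setup"
    fi
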
common/autotest_common.sh@336 -- # local source fs size avail mount use 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.S3txCB 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.S3txCB/tests/target /tmp/spdk.S3txCB 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=953643008 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4330786816 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@364 -- # avails["$mount"]=54042447872 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=61994713088 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=7952265216 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30935175168 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997356544 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=62181376 00:10:13.973 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12376535040 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12398944256 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=22409216 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30996799488 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997356544 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=557056 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6199463936 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6199468032 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
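
set_test_storage inventories every mount up front: a single df -T with the header grepped away, parsed into associative arrays keyed by mount point, so each storage candidate can be sized without re-running df. The block size df was given is not visible in this log (the numbers suggest bytes), so the sketch keeps df's default units:

    declare -A mounts fss sizes avails uses

    # df -T columns: Filesystem Type Size Used Avail Use% Mounted-on
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$size
        uses["$mount"]=$use
        avails["$mount"]=$avail
    done < <(df -T | grep -v Filesystem)

    echo "/ is ${fss[/]}: ${avails[/]} of ${sizes[/]} free"
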
read -r source fs size use avail _ mount 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:10:13.974 * Looking for test storage... 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=54042447872 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=10166857728 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:13.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
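
The sizing arithmetic above is worth unpacking: the request is 2 GiB plus a 64 MiB margin (2147483648 + 67108864 = 2214592512 bytes), target_space is the free space on the candidate mount (54042447872 on /), and for a regular overlay mount the projected size is current usage plus the request: 7952265216 + 2214592512 = 10166857728. That projection is only about 16% of the 61994713088-byte filesystem, comfortably under the 95% guard, so / is accepted and SPDK_TEST_STORAGE lands under test/nvmf/target. The same checks in isolation:

    requested_size=$((2147483648 + 67108864))   # 2 GiB + 64 MiB margin

    target_space=${avails[/]}                   # 54042447872 in this run
    (( target_space >= requested_size )) || echo "candidate too small, try next"

    # Non-tmpfs mount: projected occupancy = already used + requested.
    new_size=$(( uses[/] + requested_size ))    # 10166857728 here

    # Reject a candidate whose projection would push the fs past 95% full.
    if (( new_size * 100 / sizes[/] > 95 )); then
        echo "would overfill /, trying next candidate"
    fi
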
common/autotest_common.sh@1689 -- # xtrace_fd 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:13.974 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.974 23:16:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:13.975 23:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga 
net_devs 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:15.880 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:15.880 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:15.880 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:15.880 Found net devices under 0000:0a:00.1: cvl_0_1 
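
The device scan traced above is gather_supported_nvmf_pci_devs: it builds allow-lists of Intel E810/X722 and Mellanox device IDs, then resolves each matching PCI function to its kernel net device through sysfs before echoing the "Found net devices under ..." lines. A minimal standalone sketch of that resolution step, assuming the 0000:0a:00.x addresses from this run and that the "up" string compared in the trace is read from the device's operstate (the trace only shows the [[ up == up ]] comparison, so the source of the state string is an assumption):

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        # every entry under /sys/bus/pci/devices/$pci/net/ is a netdev bound to that port
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            name=${dev##*/}
            # assumption: operstate is where the "up" seen in the trace comes from
            if [ "$(cat "$dev/operstate" 2>/dev/null)" = up ]; then
                echo "Found net devices under $pci: $name"
            fi
        done
    done
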
00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:15.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:15.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:10:15.880 00:10:15.880 --- 10.0.0.2 ping statistics --- 00:10:15.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.880 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:15.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:15.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:10:15.880 00:10:15.880 --- 10.0.0.1 ping statistics --- 00:10:15.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.880 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:15.880 ************************************ 00:10:15.880 START TEST nvmf_filesystem_no_in_capsule 00:10:15.880 ************************************ 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:10:15.880 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:15.881 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:15.881 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:15.881 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:15.881 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.881 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1313913 00:10:15.881 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:15.881 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1313913 00:10:15.881 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1313913 ']' 00:10:15.881 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.881 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:15.881 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.881 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:15.881 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.881 [2024-07-25 23:16:13.580564] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:10:15.881 [2024-07-25 23:16:13.580660] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.138 EAL: No free 2048 kB hugepages reported on node 1 00:10:16.138 [2024-07-25 23:16:13.621073] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:16.138 [2024-07-25 23:16:13.648636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:16.138 [2024-07-25 23:16:13.738620] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.138 [2024-07-25 23:16:13.738680] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.138 [2024-07-25 23:16:13.738708] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.138 [2024-07-25 23:16:13.738720] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.138 [2024-07-25 23:16:13.738730] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
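
The nvmf_tcp_init calls traced above (nvmf/common.sh@242-268) build a two-port test topology: the target-side interface cvl_0_0 is moved into its own network namespace, the initiator keeps cvl_0_1 in the default namespace, the NVMe/TCP port is opened in iptables, and connectivity is verified with one ping in each direction before nvmf_tgt is launched inside the namespace. Condensed from this trace (interface and namespace names are specific to this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side (default namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
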
00:10:16.138 [2024-07-25 23:16:13.738817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.138 [2024-07-25 23:16:13.738891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:16.138 [2024-07-25 23:16:13.738862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.138 [2024-07-25 23:16:13.738892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.395 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:16.395 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:16.395 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:16.395 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:16.396 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.396 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.396 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:16.396 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:16.396 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.396 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.396 [2024-07-25 23:16:13.894540] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.396 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.396 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:16.396 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.396 23:16:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.396 Malloc1 00:10:16.396 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.396 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:16.396 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.396 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.396 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.396 23:16:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:16.396 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.396 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.396 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.396 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.396 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.396 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.396 [2024-07-25 23:16:14.078932] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.396 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.396 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:16.396 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:16.396 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:16.396 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:16.396 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:16.396 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:16.396 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.396 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.396 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.396 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:16.396 { 00:10:16.396 "name": "Malloc1", 00:10:16.396 "aliases": [ 00:10:16.396 "e7eed67b-57a4-4727-b5c3-952af8743763" 00:10:16.396 ], 00:10:16.396 "product_name": "Malloc disk", 00:10:16.396 "block_size": 512, 00:10:16.396 "num_blocks": 1048576, 00:10:16.396 "uuid": "e7eed67b-57a4-4727-b5c3-952af8743763", 00:10:16.396 "assigned_rate_limits": { 00:10:16.396 "rw_ios_per_sec": 0, 00:10:16.396 "rw_mbytes_per_sec": 0, 00:10:16.396 "r_mbytes_per_sec": 0, 00:10:16.396 "w_mbytes_per_sec": 0 00:10:16.396 }, 00:10:16.396 "claimed": true, 00:10:16.396 "claim_type": "exclusive_write", 00:10:16.396 "zoned": false, 00:10:16.396 "supported_io_types": { 00:10:16.396 "read": 
true, 00:10:16.396 "write": true, 00:10:16.396 "unmap": true, 00:10:16.396 "flush": true, 00:10:16.396 "reset": true, 00:10:16.396 "nvme_admin": false, 00:10:16.396 "nvme_io": false, 00:10:16.396 "nvme_io_md": false, 00:10:16.396 "write_zeroes": true, 00:10:16.396 "zcopy": true, 00:10:16.396 "get_zone_info": false, 00:10:16.396 "zone_management": false, 00:10:16.396 "zone_append": false, 00:10:16.396 "compare": false, 00:10:16.396 "compare_and_write": false, 00:10:16.396 "abort": true, 00:10:16.396 "seek_hole": false, 00:10:16.396 "seek_data": false, 00:10:16.396 "copy": true, 00:10:16.396 "nvme_iov_md": false 00:10:16.396 }, 00:10:16.396 "memory_domains": [ 00:10:16.396 { 00:10:16.396 "dma_device_id": "system", 00:10:16.396 "dma_device_type": 1 00:10:16.396 }, 00:10:16.396 { 00:10:16.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.396 "dma_device_type": 2 00:10:16.396 } 00:10:16.396 ], 00:10:16.396 "driver_specific": {} 00:10:16.396 } 00:10:16.396 ]' 00:10:16.396 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:16.653 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:16.653 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:16.653 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:16.653 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:16.653 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:16.653 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:16.653 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:17.218 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:17.218 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:17.218 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:17.218 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:17.218 23:16:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:19.744 23:16:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:19.744 23:16:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:19.744 23:16:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:19.744 23:16:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:19.744 23:16:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:19.744 23:16:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:19.744 23:16:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:19.744 23:16:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:19.744 23:16:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:19.744 23:16:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:19.744 23:16:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:19.745 23:16:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:19.745 23:16:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:19.745 23:16:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:19.745 23:16:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:19.745 23:16:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:19.745 23:16:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:19.745 23:16:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:20.002 23:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:20.934 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:20.934 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:20.934 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:20.934 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:20.934 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.934 ************************************ 00:10:20.934 START TEST filesystem_ext4 00:10:20.934 ************************************ 00:10:20.934 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
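
At this point the target side has been wired up through rpc_cmd and the host has attached: a 512 MiB malloc bdev is exported as a namespace of nqn.2016-06.io.spdk:cnode1, the kernel initiator connects over TCP, and the resulting nvme0n1 is partitioned with parted before the per-filesystem tests begin. A condensed sketch of the same sequence driven through scripts/rpc.py directly (the trace issues identical RPCs via its rpc_cmd wrapper; running rpc.py from the spdk checkout root is assumed):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # -c 0: no in-capsule data
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1             # 512 MiB, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME                                   # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    # initiator side, default namespace (the trace also passes --hostnqn/--hostid
    # from NVME_HOST, omitted here):
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
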
00:10:20.934 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:20.934 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:20.934 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:20.934 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:20.934 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:20.934 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:20.934 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:20.934 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:20.934 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:20.934 23:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:20.934 mke2fs 1.46.5 (30-Dec-2021) 00:10:20.934 Discarding device blocks: 0/522240 done 00:10:21.192 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:21.192 Filesystem UUID: ce79c7e1-547d-4c06-ae96-59296abedcda 00:10:21.192 Superblock backups stored on blocks: 00:10:21.192 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:21.192 00:10:21.192 Allocating group tables: 0/64 done 00:10:21.192 Writing inode tables: 0/64 done 00:10:22.564 Creating journal (8192 blocks): done 00:10:23.495 Writing superblocks and filesystem accounting information: 0/64 done 00:10:23.495 00:10:23.495 23:16:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:23.495 23:16:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:24.061 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:24.061 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:24.061 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:24.061 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:24.061 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:24.061 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:24.061 
23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1313913 00:10:24.061 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:24.061 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:24.061 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:24.061 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:24.061 00:10:24.061 real 0m3.198s 00:10:24.061 user 0m0.019s 00:10:24.061 sys 0m0.059s 00:10:24.061 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:24.061 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:24.061 ************************************ 00:10:24.061 END TEST filesystem_ext4 00:10:24.061 ************************************ 00:10:24.061 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:24.061 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:24.062 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:24.062 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.062 ************************************ 00:10:24.062 START TEST filesystem_btrfs 00:10:24.062 ************************************ 00:10:24.062 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:24.062 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:24.062 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:24.062 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:24.062 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:24.062 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:24.062 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:24.062 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:24.062 23:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:24.320 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:24.320 23:16:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:24.577 btrfs-progs v6.6.2 00:10:24.577 See https://btrfs.readthedocs.io for more information. 00:10:24.577 00:10:24.578 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:24.578 NOTE: several default settings have changed in version 5.15, please make sure 00:10:24.578 this does not affect your deployments: 00:10:24.578 - DUP for metadata (-m dup) 00:10:24.578 - enabled no-holes (-O no-holes) 00:10:24.578 - enabled free-space-tree (-R free-space-tree) 00:10:24.578 00:10:24.578 Label: (null) 00:10:24.578 UUID: ca37277f-12f5-4484-b319-bea825e64f93 00:10:24.578 Node size: 16384 00:10:24.578 Sector size: 4096 00:10:24.578 Filesystem size: 510.00MiB 00:10:24.578 Block group profiles: 00:10:24.578 Data: single 8.00MiB 00:10:24.578 Metadata: DUP 32.00MiB 00:10:24.578 System: DUP 8.00MiB 00:10:24.578 SSD detected: yes 00:10:24.578 Zoned device: no 00:10:24.578 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:24.578 Runtime features: free-space-tree 00:10:24.578 Checksum: crc32c 00:10:24.578 Number of devices: 1 00:10:24.578 Devices: 00:10:24.578 ID SIZE PATH 00:10:24.578 1 510.00MiB /dev/nvme0n1p1 00:10:24.578 00:10:24.578 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:24.578 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:25.510 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:25.510 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:25.510 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:25.510 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:25.510 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:25.510 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:25.510 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1313913 00:10:25.510 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:25.510 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:25.510 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:10:25.510 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:25.510 00:10:25.510 real 0m1.187s 00:10:25.510 user 0m0.025s 00:10:25.510 sys 0m0.105s 00:10:25.511 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:25.511 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:25.511 ************************************ 00:10:25.511 END TEST filesystem_btrfs 00:10:25.511 ************************************ 00:10:25.511 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:25.511 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:25.511 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:25.511 23:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:25.511 ************************************ 00:10:25.511 START TEST filesystem_xfs 00:10:25.511 ************************************ 00:10:25.511 23:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:25.511 23:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:25.511 23:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:25.511 23:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:25.511 23:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:25.511 23:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:25.511 23:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:25.511 23:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:10:25.511 23:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:25.511 23:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:25.511 23:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:25.511 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:25.511 = sectsz=512 attr=2, projid32bit=1 00:10:25.511 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:25.511 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:10:25.511 data = bsize=4096 blocks=130560, imaxpct=25 00:10:25.511 = sunit=0 swidth=0 blks 00:10:25.511 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:25.511 log =internal log bsize=4096 blocks=16384, version=2 00:10:25.511 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:25.511 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:26.439 Discarding blocks...Done. 00:10:26.439 23:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:26.439 23:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1313913 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:28.337 00:10:28.337 real 0m2.824s 00:10:28.337 user 0m0.016s 00:10:28.337 sys 0m0.064s 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:28.337 ************************************ 00:10:28.337 END TEST filesystem_xfs 00:10:28.337 ************************************ 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:28.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1313913 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1313913 ']' 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1313913 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:28.337 23:16:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1313913 00:10:28.337 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:28.337 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:28.337 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1313913' 00:10:28.337 killing process with pid 1313913 00:10:28.337 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1313913 00:10:28.337 23:16:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 1313913 00:10:28.916 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:28.916 00:10:28.916 real 0m12.910s 00:10:28.916 user 0m49.687s 00:10:28.916 sys 0m1.858s 00:10:28.916 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:28.916 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.916 ************************************ 00:10:28.916 END TEST nvmf_filesystem_no_in_capsule 00:10:28.916 ************************************ 00:10:28.916 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:28.916 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:28.916 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:28.916 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:28.916 ************************************ 00:10:28.916 START TEST nvmf_filesystem_in_capsule 00:10:28.916 ************************************ 00:10:28.916 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:10:28.916 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:28.916 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:28.916 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:28.916 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:28.916 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.916 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1315607 00:10:28.916 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:28.916 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1315607 00:10:28.916 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1315607 ']' 00:10:28.916 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.916 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:28.916 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
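The no-in-capsule suite ends above; the same ext4/btrfs/xfs checks are rerun below with a 4096-byte in-capsule data size. Each per-filesystem check follows the pattern visible in the xtrace records. A minimal sketch of that flow, reconstructed from the trace (the make_filesystem helper and nvmfpid variable names are taken from the traced script paths; this is a reconstruction, not the verbatim target/filesystem.sh):

    nvmf_filesystem_create() {
        local fstype=$1 nvme_name=$2
        make_filesystem "$fstype" "/dev/${nvme_name}p1"   # mkfs.ext4 -F, mkfs.btrfs -f, or mkfs.xfs -f
        mount "/dev/${nvme_name}p1" /mnt/device           # filesystem.sh@23
        touch /mnt/device/aaa                             # filesystem.sh@24: write a file over the fabric
        sync                                              # filesystem.sh@25
        rm /mnt/device/aaa                                # filesystem.sh@26
        sync                                              # filesystem.sh@27
        i=0                                               # filesystem.sh@29: counter before umount (retry loop assumed)
        umount /mnt/device                                # filesystem.sh@30
        kill -0 "$nvmfpid"                                # filesystem.sh@37: target process must still be alive
        lsblk -l -o NAME | grep -q -w "$nvme_name"        # filesystem.sh@40: namespace still enumerated
        lsblk -l -o NAME | grep -q -w "${nvme_name}p1"    # filesystem.sh@43: partition still present
    }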
00:10:28.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.916 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:28.916 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.916 [2024-07-25 23:16:26.537125] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:10:28.916 [2024-07-25 23:16:26.537227] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:28.916 EAL: No free 2048 kB hugepages reported on node 1 00:10:28.916 [2024-07-25 23:16:26.575724] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:28.916 [2024-07-25 23:16:26.601858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:29.176 [2024-07-25 23:16:26.691808] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:29.176 [2024-07-25 23:16:26.691864] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:29.176 [2024-07-25 23:16:26.691892] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:29.176 [2024-07-25 23:16:26.691904] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:29.176 [2024-07-25 23:16:26.691914] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
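For reference, the target-side setup that the records below trace maps onto plain SPDK RPCs. A hedged sketch assembled from the arguments visible in the log (the scripts/rpc.py invocation path is an assumption; the log itself drives these through rpc_cmd inside the cvl_0_0_ns_spdk network namespace):

    # start the target on 4 cores (-m 0xF), as nvmfappstart does above
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c 4096: in-capsule data size under test
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1             # 512 MiB bdev, 512 B blocks (num_blocks=1048576 below)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side, as in filesystem.sh@60
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420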
00:10:29.176 [2024-07-25 23:16:26.691996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.176 [2024-07-25 23:16:26.692067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.176 [2024-07-25 23:16:26.692129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:29.176 [2024-07-25 23:16:26.692132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.176 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:29.176 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:29.176 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:29.176 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:29.176 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.176 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.176 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:29.176 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:29.176 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.177 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.177 [2024-07-25 23:16:26.848588] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:29.177 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.177 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:29.177 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.177 23:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.434 Malloc1 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.434 [2024-07-25 23:16:27.033326] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:29.434 { 00:10:29.434 "name": "Malloc1", 00:10:29.434 "aliases": [ 00:10:29.434 "a05b1e19-aaa1-4dab-9fec-1a22f8a97b5a" 00:10:29.434 ], 00:10:29.434 "product_name": "Malloc disk", 00:10:29.434 "block_size": 512, 00:10:29.434 "num_blocks": 1048576, 00:10:29.434 "uuid": "a05b1e19-aaa1-4dab-9fec-1a22f8a97b5a", 00:10:29.434 "assigned_rate_limits": { 00:10:29.434 "rw_ios_per_sec": 0, 00:10:29.434 "rw_mbytes_per_sec": 0, 00:10:29.434 "r_mbytes_per_sec": 0, 00:10:29.434 "w_mbytes_per_sec": 0 00:10:29.434 }, 00:10:29.434 "claimed": true, 00:10:29.434 "claim_type": "exclusive_write", 00:10:29.434 "zoned": false, 00:10:29.434 "supported_io_types": { 00:10:29.434 "read": true, 00:10:29.434 "write": true, 00:10:29.434 "unmap": true, 00:10:29.434 "flush": true, 00:10:29.434 "reset": true, 00:10:29.434 "nvme_admin": false, 
00:10:29.434 "nvme_io": false, 00:10:29.434 "nvme_io_md": false, 00:10:29.434 "write_zeroes": true, 00:10:29.434 "zcopy": true, 00:10:29.434 "get_zone_info": false, 00:10:29.434 "zone_management": false, 00:10:29.434 "zone_append": false, 00:10:29.434 "compare": false, 00:10:29.434 "compare_and_write": false, 00:10:29.434 "abort": true, 00:10:29.434 "seek_hole": false, 00:10:29.434 "seek_data": false, 00:10:29.434 "copy": true, 00:10:29.434 "nvme_iov_md": false 00:10:29.434 }, 00:10:29.434 "memory_domains": [ 00:10:29.434 { 00:10:29.434 "dma_device_id": "system", 00:10:29.434 "dma_device_type": 1 00:10:29.434 }, 00:10:29.434 { 00:10:29.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.434 "dma_device_type": 2 00:10:29.434 } 00:10:29.434 ], 00:10:29.434 "driver_specific": {} 00:10:29.434 } 00:10:29.434 ]' 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:29.434 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:30.365 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:30.365 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:30.365 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:30.365 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:30.365 23:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:32.261 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:32.261 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:32.261 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:32.261 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:32.261 23:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:32.261 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:32.261 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:32.261 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:32.261 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:32.261 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:32.261 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:32.261 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:32.261 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:32.261 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:32.261 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:32.261 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:32.261 23:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:32.519 23:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:33.451 23:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:34.383 23:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:34.383 23:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:34.383 23:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:34.383 23:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:34.383 23:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.383 ************************************ 00:10:34.383 START TEST filesystem_in_capsule_ext4 00:10:34.383 ************************************ 00:10:34.383 23:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:34.383 23:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:34.383 23:16:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:34.383 23:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:34.383 23:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:34.383 23:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:34.383 23:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:34.383 23:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:34.383 23:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:34.383 23:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:34.383 23:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:34.383 mke2fs 1.46.5 (30-Dec-2021) 00:10:34.639 Discarding device blocks: 0/522240 done 00:10:34.639 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:34.639 Filesystem UUID: aad5c364-21c1-4398-8382-a884177b60d6 00:10:34.639 Superblock backups stored on blocks: 00:10:34.639 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:34.639 00:10:34.639 Allocating group tables: 0/64 done 00:10:34.639 Writing inode tables: 0/64 done 00:10:34.896 Creating journal (8192 blocks): done 00:10:35.716 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:10:35.716 00:10:35.716 23:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:35.716 23:16:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:36.646 23:16:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1315607 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:36.646 00:10:36.646 real 0m2.229s 00:10:36.646 user 0m0.021s 00:10:36.646 sys 0m0.047s 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:36.646 ************************************ 00:10:36.646 END TEST filesystem_in_capsule_ext4 00:10:36.646 ************************************ 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.646 ************************************ 00:10:36.646 START TEST filesystem_in_capsule_btrfs 00:10:36.646 ************************************ 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- common/autotest_common.sh@929 -- # local force 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:36.646 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:36.904 btrfs-progs v6.6.2 00:10:36.904 See https://btrfs.readthedocs.io for more information. 00:10:36.904 00:10:36.904 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:36.904 NOTE: several default settings have changed in version 5.15, please make sure 00:10:36.904 this does not affect your deployments: 00:10:36.904 - DUP for metadata (-m dup) 00:10:36.904 - enabled no-holes (-O no-holes) 00:10:36.904 - enabled free-space-tree (-R free-space-tree) 00:10:36.904 00:10:36.904 Label: (null) 00:10:36.904 UUID: d0301eee-3e31-4bf2-980a-ab9b6720a59f 00:10:36.904 Node size: 16384 00:10:36.904 Sector size: 4096 00:10:36.904 Filesystem size: 510.00MiB 00:10:36.904 Block group profiles: 00:10:36.904 Data: single 8.00MiB 00:10:36.904 Metadata: DUP 32.00MiB 00:10:36.904 System: DUP 8.00MiB 00:10:36.904 SSD detected: yes 00:10:36.904 Zoned device: no 00:10:36.904 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:36.904 Runtime features: free-space-tree 00:10:36.904 Checksum: crc32c 00:10:36.904 Number of devices: 1 00:10:36.904 Devices: 00:10:36.904 ID SIZE PATH 00:10:36.904 1 510.00MiB /dev/nvme0n1p1 00:10:36.904 00:10:36.904 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:36.904 23:16:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:37.837 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:37.837 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:37.837 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:37.837 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:37.837 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:37.837 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:37.837 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1315607 00:10:37.837 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:37.837 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:37.837 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:37.837 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:37.837 00:10:37.837 real 0m1.129s 00:10:37.837 user 0m0.025s 00:10:37.837 sys 0m0.110s 00:10:37.837 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:37.837 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:37.837 ************************************ 00:10:37.837 END TEST filesystem_in_capsule_btrfs 00:10:37.837 ************************************ 00:10:37.837 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:37.837 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:37.837 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.837 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.837 ************************************ 00:10:37.837 START TEST filesystem_in_capsule_xfs 00:10:37.837 ************************************ 00:10:37.837 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:37.837 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:37.837 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:37.837 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:37.837 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:37.838 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:37.838 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:37.838 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:10:37.838 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:37.838 23:16:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:37.838 23:16:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:37.838 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:37.838 = sectsz=512 attr=2, projid32bit=1 00:10:37.838 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:37.838 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:37.838 data = bsize=4096 blocks=130560, imaxpct=25 00:10:37.838 = sunit=0 swidth=0 blks 00:10:37.838 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:37.838 log =internal log bsize=4096 blocks=16384, version=2 00:10:37.838 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:37.838 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:38.771 Discarding blocks...Done. 00:10:38.771 23:16:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:38.771 23:16:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:41.297 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:41.297 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:41.297 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:41.297 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:41.297 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:41.297 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:41.297 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1315607 00:10:41.297 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:41.297 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:41.297 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:41.297 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:41.297 00:10:41.297 real 0m3.388s 00:10:41.297 user 0m0.012s 00:10:41.297 sys 0m0.065s 00:10:41.297 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:41.297 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:41.297 ************************************ 00:10:41.297 END TEST filesystem_in_capsule_xfs 00:10:41.297 ************************************ 00:10:41.297 23:16:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:41.297 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:41.297 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:41.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.297 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:41.297 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:41.297 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:41.297 23:16:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.297 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:41.297 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.297 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:41.297 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:41.297 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.298 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:41.556 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.556 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:41.556 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1315607 00:10:41.556 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1315607 ']' 00:10:41.556 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1315607 00:10:41.556 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:41.556 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:41.556 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1315607 00:10:41.556 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:41.556 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:10:41.556 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1315607' 00:10:41.556 killing process with pid 1315607 00:10:41.556 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1315607 00:10:41.556 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1315607 00:10:41.814 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:41.814 00:10:41.814 real 0m13.004s 00:10:41.814 user 0m50.063s 00:10:41.814 sys 0m1.821s 00:10:41.814 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:41.814 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:41.814 ************************************ 00:10:41.814 END TEST nvmf_filesystem_in_capsule 00:10:41.814 ************************************ 00:10:41.814 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:41.814 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:41.814 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:10:41.814 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:41.815 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:10:41.815 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:41.815 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:41.815 rmmod nvme_tcp 00:10:41.815 rmmod nvme_fabrics 00:10:42.074 rmmod nvme_keyring 00:10:42.074 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:42.074 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:10:42.074 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:10:42.074 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:42.074 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:42.074 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:42.074 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:42.074 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:42.074 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:42.074 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.074 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.074 23:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.982 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:43.982 00:10:43.982 real 
0m30.429s 00:10:43.982 user 1m40.628s 00:10:43.982 sys 0m5.310s 00:10:43.982 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:43.982 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:43.982 ************************************ 00:10:43.982 END TEST nvmf_filesystem 00:10:43.982 ************************************ 00:10:43.982 23:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:43.982 23:16:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:43.982 23:16:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:43.982 23:16:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:43.982 ************************************ 00:10:43.982 START TEST nvmf_target_discovery 00:10:43.982 ************************************ 00:10:43.982 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:43.982 * Looking for test storage... 00:10:44.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.241 23:16:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.241 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.242 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:44.242 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:44.242 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:44.242 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:44.242 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:44.242 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:44.242 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:44.242 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:44.242 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:44.242 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.242 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:44.242 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:44.242 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:44.242 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.242 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.242 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.242 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:44.242 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:44.242 23:16:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:10:44.242 23:16:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:46.144 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:46.144 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:46.144 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:46.144 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:46.144 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:10:46.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:46.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms
00:10:46.145
00:10:46.145 --- 10.0.0.2 ping statistics ---
00:10:46.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:46.145 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:46.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:46.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms
00:10:46.145
00:10:46.145 --- 10.0.0.1 ping statistics ---
00:10:46.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:46.145 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1319212
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1319212
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1319212 ']'
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:46.145 23:16:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:46.402 [2024-07-25 23:16:43.908074] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
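[Editor's note: the records above are nvmftestinit and nvmfappstart at work. The two ice ports are split between the default namespace (initiator side, cvl_0_1 at 10.0.0.1) and a private namespace (target side, cvl_0_0 at 10.0.0.2), TCP port 4420 is opened, the path is ping-checked both ways, and nvmf_tgt is started inside the target namespace. A minimal standalone sketch of that bring-up follows, assuming root, ports already named cvl_0_0/cvl_0_1, and $SPDK pointing at an SPDK checkout ($SPDK is our shorthand, not a harness variable); the polling loop is only roughly what waitforlisten does.]

    # give the target side its own network stack
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # initiator side stays in the default namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target reachability
    # start the target where 10.0.0.2 lives; -m 0xF pins four reactor cores
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    # roughly what waitforlisten does: poll the RPC socket until it answers
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

[The namespace split is what lets a single host exercise real NICs as both NVMe/TCP target and initiator in one test run.]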
00:10:46.402 [2024-07-25 23:16:43.908170] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.402 EAL: No free 2048 kB hugepages reported on node 1 00:10:46.402 [2024-07-25 23:16:43.945844] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:46.402 [2024-07-25 23:16:43.974161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.402 [2024-07-25 23:16:44.069747] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.402 [2024-07-25 23:16:44.069801] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.402 [2024-07-25 23:16:44.069818] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.402 [2024-07-25 23:16:44.069831] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.402 [2024-07-25 23:16:44.069843] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.402 [2024-07-25 23:16:44.069935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.402 [2024-07-25 23:16:44.070007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.402 [2024-07-25 23:16:44.070054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.402 [2024-07-25 23:16:44.070055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.661 [2024-07-25 23:16:44.228615] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:46.661 23:16:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.661 Null1 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.661 [2024-07-25 23:16:44.268938] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.661 Null2 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.661 Null3 00:10:46.661 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.662 23:16:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.662 Null4 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.662 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:10:46.920 00:10:46.920 Discovery Log Number of Records 6, Generation counter 6 00:10:46.920 =====Discovery Log Entry 0====== 00:10:46.920 trtype: tcp 00:10:46.920 adrfam: ipv4 00:10:46.920 subtype: current discovery subsystem 00:10:46.920 treq: not required 00:10:46.920 portid: 0 00:10:46.920 trsvcid: 4420 00:10:46.920 subnqn: 
nqn.2014-08.org.nvmexpress.discovery
00:10:46.920 traddr: 10.0.0.2
00:10:46.920 eflags: explicit discovery connections, duplicate discovery information
00:10:46.920 sectype: none
00:10:46.920 =====Discovery Log Entry 1======
00:10:46.920 trtype: tcp
00:10:46.920 adrfam: ipv4
00:10:46.920 subtype: nvme subsystem
00:10:46.920 treq: not required
00:10:46.920 portid: 0
00:10:46.920 trsvcid: 4420
00:10:46.920 subnqn: nqn.2016-06.io.spdk:cnode1
00:10:46.920 traddr: 10.0.0.2
00:10:46.920 eflags: none
00:10:46.920 sectype: none
00:10:46.920 =====Discovery Log Entry 2======
00:10:46.920 trtype: tcp
00:10:46.920 adrfam: ipv4
00:10:46.920 subtype: nvme subsystem
00:10:46.920 treq: not required
00:10:46.920 portid: 0
00:10:46.920 trsvcid: 4420
00:10:46.920 subnqn: nqn.2016-06.io.spdk:cnode2
00:10:46.920 traddr: 10.0.0.2
00:10:46.920 eflags: none
00:10:46.920 sectype: none
00:10:46.920 =====Discovery Log Entry 3======
00:10:46.920 trtype: tcp
00:10:46.920 adrfam: ipv4
00:10:46.920 subtype: nvme subsystem
00:10:46.920 treq: not required
00:10:46.920 portid: 0
00:10:46.920 trsvcid: 4420
00:10:46.920 subnqn: nqn.2016-06.io.spdk:cnode3
00:10:46.920 traddr: 10.0.0.2
00:10:46.920 eflags: none
00:10:46.920 sectype: none
00:10:46.920 =====Discovery Log Entry 4======
00:10:46.920 trtype: tcp
00:10:46.920 adrfam: ipv4
00:10:46.920 subtype: nvme subsystem
00:10:46.920 treq: not required
00:10:46.920 portid: 0
00:10:46.920 trsvcid: 4420
00:10:46.920 subnqn: nqn.2016-06.io.spdk:cnode4
00:10:46.920 traddr: 10.0.0.2
00:10:46.920 eflags: none
00:10:46.920 sectype: none
00:10:46.920 =====Discovery Log Entry 5======
00:10:46.920 trtype: tcp
00:10:46.920 adrfam: ipv4
00:10:46.920 subtype: discovery subsystem referral
00:10:46.920 treq: not required
00:10:46.920 portid: 0
00:10:46.920 trsvcid: 4430
00:10:46.920 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:10:46.920 traddr: 10.0.0.2
00:10:46.920 eflags: none
00:10:46.920 sectype: none
00:10:46.920 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:10:46.920 Perform nvmf subsystem discovery via RPC
00:10:46.920 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:10:46.920 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.920 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:46.920 [
00:10:46.920 {
00:10:46.920 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:10:46.920 "subtype": "Discovery",
00:10:46.920 "listen_addresses": [
00:10:46.920 {
00:10:46.920 "trtype": "TCP",
00:10:46.920 "adrfam": "IPv4",
00:10:46.920 "traddr": "10.0.0.2",
00:10:46.921 "trsvcid": "4420"
00:10:46.921 }
00:10:46.921 ],
00:10:46.921 "allow_any_host": true,
00:10:46.921 "hosts": []
00:10:46.921 },
00:10:46.921 {
00:10:46.921 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:10:46.921 "subtype": "NVMe",
00:10:46.921 "listen_addresses": [
00:10:46.921 {
00:10:46.921 "trtype": "TCP",
00:10:46.921 "adrfam": "IPv4",
00:10:46.921 "traddr": "10.0.0.2",
00:10:46.921 "trsvcid": "4420"
00:10:46.921 }
00:10:46.921 ],
00:10:46.921 "allow_any_host": true,
00:10:46.921 "hosts": [],
00:10:46.921 "serial_number": "SPDK00000000000001",
00:10:46.921 "model_number": "SPDK bdev Controller",
00:10:46.921 "max_namespaces": 32,
00:10:46.921 "min_cntlid": 1,
00:10:46.921 "max_cntlid": 65519,
00:10:46.921 "namespaces": [
00:10:46.921 {
00:10:46.921 "nsid": 1,
00:10:46.921 "bdev_name": "Null1",
00:10:46.921 "name": "Null1",
00:10:46.921 "nguid": "A2C4BB8ADA3346B697CFB36C14E171FF",
00:10:46.921 "uuid": "a2c4bb8a-da33-46b6-97cf-b36c14e171ff"
00:10:46.921 }
00:10:46.921 ]
00:10:46.921 },
00:10:46.921 {
00:10:46.921 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:10:46.921 "subtype": "NVMe",
00:10:46.921 "listen_addresses": [
00:10:46.921 {
00:10:46.921 "trtype": "TCP",
00:10:46.921 "adrfam": "IPv4",
00:10:46.921 "traddr": "10.0.0.2",
00:10:46.921 "trsvcid": "4420"
00:10:46.921 }
00:10:46.921 ],
00:10:46.921 "allow_any_host": true,
00:10:46.921 "hosts": [],
00:10:46.921 "serial_number": "SPDK00000000000002",
00:10:46.921 "model_number": "SPDK bdev Controller",
00:10:46.921 "max_namespaces": 32,
00:10:46.921 "min_cntlid": 1,
00:10:46.921 "max_cntlid": 65519,
00:10:46.921 "namespaces": [
00:10:46.921 {
00:10:46.921 "nsid": 1,
00:10:46.921 "bdev_name": "Null2",
00:10:46.921 "name": "Null2",
00:10:46.921 "nguid": "D5949E3F64CA410D9F05ADA87F2E4553",
00:10:46.921 "uuid": "d5949e3f-64ca-410d-9f05-ada87f2e4553"
00:10:46.921 }
00:10:46.921 ]
00:10:46.921 },
00:10:46.921 {
00:10:46.921 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:10:46.921 "subtype": "NVMe",
00:10:46.921 "listen_addresses": [
00:10:46.921 {
00:10:46.921 "trtype": "TCP",
00:10:46.921 "adrfam": "IPv4",
00:10:46.921 "traddr": "10.0.0.2",
00:10:46.921 "trsvcid": "4420"
00:10:46.921 }
00:10:46.921 ],
00:10:46.921 "allow_any_host": true,
00:10:46.921 "hosts": [],
00:10:46.921 "serial_number": "SPDK00000000000003",
00:10:46.921 "model_number": "SPDK bdev Controller",
00:10:46.921 "max_namespaces": 32,
00:10:46.921 "min_cntlid": 1,
00:10:46.921 "max_cntlid": 65519,
00:10:46.921 "namespaces": [
00:10:46.921 {
00:10:46.921 "nsid": 1,
00:10:46.921 "bdev_name": "Null3",
00:10:46.921 "name": "Null3",
00:10:46.921 "nguid": "70562A0F655E4683B74BA77C4785545C",
00:10:46.921 "uuid": "70562a0f-655e-4683-b74b-a77c4785545c"
00:10:46.921 }
00:10:46.921 ]
00:10:46.921 },
00:10:46.921 {
00:10:46.921 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:10:46.921 "subtype": "NVMe",
00:10:46.921 "listen_addresses": [
00:10:46.921 {
00:10:46.921 "trtype": "TCP",
00:10:46.921 "adrfam": "IPv4",
00:10:46.921 "traddr": "10.0.0.2",
00:10:46.921 "trsvcid": "4420"
00:10:46.921 }
00:10:46.921 ],
00:10:46.921 "allow_any_host": true,
00:10:46.921 "hosts": [],
00:10:46.921 "serial_number": "SPDK00000000000004",
00:10:46.921 "model_number": "SPDK bdev Controller",
00:10:46.921 "max_namespaces": 32,
00:10:46.921 "min_cntlid": 1,
00:10:46.921 "max_cntlid": 65519,
00:10:46.921 "namespaces": [
00:10:46.921 {
00:10:46.921 "nsid": 1,
00:10:46.921 "bdev_name": "Null4",
00:10:46.921 "name": "Null4",
00:10:46.921 "nguid": "9B9847995F15463199EA8EE371930668",
00:10:46.921 "uuid": "9b984799-5f15-4631-99ea-8ee371930668"
00:10:46.921 }
00:10:46.921 ]
00:10:46.921 }
00:10:46.921 ]
00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:46.921 23:16:44
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.921 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:46.922 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:46.922 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.922 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.922 23:16:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.922 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:46.922 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.922 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.922 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.922 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:46.922 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.922 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:47.180 rmmod nvme_tcp 00:10:47.180 rmmod nvme_fabrics 00:10:47.180 rmmod nvme_keyring 00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1319212 ']' 00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1319212 00:10:47.180 
23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1319212 ']'
00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1319212
00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname
00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1319212
00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1319212' killing process with pid 1319212
00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1319212
00:10:47.180 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1319212
00:10:47.439 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:47.439 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:47.439 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:47.439 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:47.439 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:47.439 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:47.439 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:47.439 23:16:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:49.372 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:49.372
00:10:49.372 real 0m5.367s
00:10:49.372 user 0m4.409s
00:10:49.372 sys 0m1.824s
00:10:49.372 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:49.372 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:49.372 ************************************
00:10:49.372 END TEST nvmf_target_discovery ************************************
00:10:49.372 23:16:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:10:49.372 23:16:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:10:49.372 23:16:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:49.372 23:16:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:10:49.372 ************************************
00:10:49.372 START TEST nvmf_referrals
************************************ 00:10:49.372 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:49.630 * Looking for test storage... 00:10:49.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:49.630 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:49.630 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:49.630 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.630 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.630 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:49.630 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:49.631 23:16:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:10:49.631 23:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 
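[Editor's note: the referrals.sh header above pins three referral addresses (NVMF_REFERRAL_IP_1..3 = 127.0.0.2, 127.0.0.3, 127.0.0.4) and referral port 4430; the script's own xtrace shows the exact call sequence as the log continues. As a rough sketch of the RPC surface it exercises, under the same $SPDK shorthand as above (the loop and ordering here are illustrative, not the script's literal body):]

    # advertise referral points from the local discovery subsystem
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        "$SPDK/scripts/rpc.py" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    # each address then surfaces as a "discovery subsystem referral" record in
    # the discovery log page, like Entry 5 in the nvme discover output earlier
    "$SPDK/scripts/rpc.py" nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430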
00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:51.535 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.535 
23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:51.535 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:51.535 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:51.535 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:51.535 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.535 23:16:49 
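For reference, the device-discovery loop traced above resolves each supported PCI function to its kernel network interface through sysfs before picking the test interfaces. A minimal standalone sketch of that lookup, using the two E810 functions found in this run (only the PCI addresses and interface names come from the log; the rest is illustrative):

    #!/usr/bin/env bash
    # Sketch: map NVMe-oF-capable PCI functions to their net devices via sysfs,
    # mirroring the pci_net_devs expansion in nvmf/common.sh above.
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$dev" ] || continue                      # glob matched nothing
            iface=${dev##*/}                               # e.g. cvl_0_0
            state=$(cat "/sys/class/net/$iface/operstate") # the harness checks "up"
            echo "Found net devices under $pci: $iface ($state)"
        done
    done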
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:51.536 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:10:51.536 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:51.536 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:51.536 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:51.536 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.536 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.536 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.536 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:51.536 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.536 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.536 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:51.536 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.536 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.536 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:51.536 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:51.536 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.536 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.536 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.536 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.536 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:51.536 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.536 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.794 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.794 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:51.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:51.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:10:51.794 00:10:51.794 --- 10.0.0.2 ping statistics --- 00:10:51.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.794 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:10:51.794 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:51.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:51.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:10:51.794 00:10:51.794 --- 10.0.0.1 ping statistics --- 00:10:51.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.794 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:10:51.794 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.794 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:10:51.794 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:51.794 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.794 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:51.794 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:51.794 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.794 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:51.794 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:51.794 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:51.794 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:51.794 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:51.794 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:51.794 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1321300 00:10:51.794 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:51.795 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1321300 00:10:51.795 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1321300 ']' 00:10:51.795 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.795 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:51.795 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
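Condensed, the bring-up traced above moves one NIC port into a private network namespace so the target (10.0.0.2 on cvl_0_0, inside the namespace) and the initiator (10.0.0.1 on cvl_0_1, in the root namespace) talk over a real link, verifies both directions with ping, and then starts nvmf_tgt inside the namespace. A minimal sketch of the same steps, with addresses and flags taken from the log and a simplified stand-in for waitforlisten:

    # Sketch: netns-isolated NVMe/TCP target bring-up (simplified from nvmf_tcp_init).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> root ns
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done  # crude waitforlisten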
00:10:51.795 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:51.795 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:51.795 [2024-07-25 23:16:49.355318] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:10:51.795 [2024-07-25 23:16:49.355402] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.795 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.795 [2024-07-25 23:16:49.394436] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:51.795 [2024-07-25 23:16:49.426438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:52.053 [2024-07-25 23:16:49.520477] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.053 [2024-07-25 23:16:49.520531] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:52.053 [2024-07-25 23:16:49.520548] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:52.053 [2024-07-25 23:16:49.520562] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:52.053 [2024-07-25 23:16:49.520574] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:52.053 [2024-07-25 23:16:49.520669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.053 [2024-07-25 23:16:49.520720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.053 [2024-07-25 23:16:49.520773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:52.053 [2024-07-25 23:16:49.520776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.053 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:52.053 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:10:52.053 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.054 [2024-07-25 23:16:49.674691] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 
10.0.0.2 -s 8009 discovery 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.054 [2024-07-25 23:16:49.686897] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.054 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:52.054 23:16:49 
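Stripped of the xtrace noise, the referral setup traced above is a short RPC sequence: create the TCP transport, expose a discovery listener, register three referrals, then confirm the count and addresses. A minimal sketch assuming rpc.py talks to the target started earlier (the harness's rpc_cmd wraps the same script):

    # Sketch: referral setup and verification via SPDK's rpc.py.
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    [ "$($rpc nvmf_discovery_get_referrals | jq length)" -eq 3 ]   # expect 3 entries
    $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort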
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.313 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:52.313 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:52.313 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:52.313 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:52.313 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:52.313 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:52.313 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:52.313 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:52.313 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:52.313 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:52.313 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:52.313 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.313 23:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.313 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.313 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:52.313 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.313 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.313 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.313 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:52.313 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.313 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.313 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.313 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:52.313 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:52.313 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.313 23:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.313 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:52.571 23:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:52.571 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:52.829 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:52.829 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:52.829 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:52.829 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:52.829 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:52.829 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:52.829 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:52.829 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:52.829 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:52.829 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:52.829 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:52.829 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:52.829 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:52.829 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:52.829 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:52.830 23:16:50 
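The host-side checks interleaved above re-read the discovery log page with nvme-cli and filter it with jq, distinguishing referrals to NVMe subsystems from referrals to other discovery services. A minimal sketch of that query path, using the host NQN/ID printed in this run:

    # Sketch: classify discovery records the way get_referral_ips/get_discovery_entries do.
    discover() {
        nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
                      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
                      -t tcp -a 10.0.0.2 -s 8009 -o json
    }
    # Referred addresses (everything but the current discovery subsystem):
    discover | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    # Subsystem NQN carried by an 'nvme subsystem' referral:
    discover | jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'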
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.830 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.830 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.830 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:52.830 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:52.830 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:52.830 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:52.830 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.830 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:52.830 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:53.088 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.088 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:53.088 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:53.088 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:53.088 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:53.088 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:53.088 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:53.088 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:53.088 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:53.088 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:53.088 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:53.088 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:53.088 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:53.088 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:53.088 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:53.088 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:53.346 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:53.346 23:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:53.346 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:53.346 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:53.346 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:53.346 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:53.346 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:53.346 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:53.346 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.346 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.346 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.346 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:53.346 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:53.346 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.346 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.346 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.346 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:53.346 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:53.346 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:53.346 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:53.346 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:53.347 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:53.347 23:16:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:53.605 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:53.605 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:53.605 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:53.605 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@86 -- # nvmftestfini 00:10:53.605 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:53.605 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:10:53.605 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:53.605 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:10:53.605 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:53.605 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:53.605 rmmod nvme_tcp 00:10:53.605 rmmod nvme_fabrics 00:10:53.605 rmmod nvme_keyring 00:10:53.605 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:53.606 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:10:53.606 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:10:53.606 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1321300 ']' 00:10:53.606 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1321300 00:10:53.606 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1321300 ']' 00:10:53.606 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1321300 00:10:53.606 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:10:53.606 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:53.606 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1321300 00:10:53.606 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:53.606 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:53.606 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1321300' 00:10:53.606 killing process with pid 1321300 00:10:53.606 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 1321300 00:10:53.606 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1321300 00:10:53.866 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:53.866 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:53.866 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:53.866 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:53.866 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:53.866 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.866 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:53.866 23:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.770 23:16:53 
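Teardown (nvmftestfini) mirrors the setup: unload the host NVMe/TCP modules, kill the target, and dismantle the namespace plumbing. A condensed sketch of the equivalent commands; the pid is the one from this run, and the explicit netns delete is an assumption about what _remove_spdk_ns does internally:

    # Sketch: teardown equivalent to the nvmftestfini trace above.
    modprobe -v -r nvme-tcp        # also drops nvme_fabrics/nvme_keyring, as logged
    modprobe -v -r nvme-fabrics
    kill 1321300 && wait 1321300 2>/dev/null   # stop nvmf_tgt (assumes same shell started it)
    ip netns delete cvl_0_0_ns_spdk            # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1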
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:55.770 00:10:55.770 real 0m6.394s 00:10:55.770 user 0m8.917s 00:10:55.770 sys 0m2.069s 00:10:55.770 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:55.770 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:55.770 ************************************ 00:10:55.770 END TEST nvmf_referrals 00:10:55.770 ************************************ 00:10:56.029 23:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:56.029 23:16:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:56.029 23:16:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:56.029 23:16:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:56.029 ************************************ 00:10:56.029 START TEST nvmf_connect_disconnect 00:10:56.029 ************************************ 00:10:56.029 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:56.029 * Looking for test storage... 00:10:56.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.029 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:56.029 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:56.029 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:56.029 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:56.029 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:56.029 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:56.029 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:56.029 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:56.029 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:56.029 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:56.029 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:56.029 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:56.029 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:56.029 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:56.029 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:56.029 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:56.029 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:10:56.030 23:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 --
# set +x 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:57.932 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:57.933 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:57.933 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:57.933 23:16:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:57.933 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:57.933 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:57.933 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:58.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:58.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:10:58.193 00:10:58.193 --- 10.0.0.2 ping statistics --- 00:10:58.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.193 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:58.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
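A consolidated sketch of the namespace bring-up traced above (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the ones this run detected; other machines will differ, and this is a condensed restatement of the commands nvmf_tcp_init just ran, not additional captured output):

ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one E810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the netns
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                                  # root ns -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator check

Splitting the two ports of one NIC across namespaces lets initiator and target run on a single host while the traffic still crosses the physical link rather than loopback; the two pings whose output follows verify both directions before the target app starts.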
00:10:58.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:10:58.193 00:10:58.193 --- 10.0.0.1 ping statistics --- 00:10:58.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.193 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1323592 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1323592 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1323592 ']' 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:58.193 23:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:58.193 [2024-07-25 23:16:55.809293] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:10:58.193 [2024-07-25 23:16:55.809381] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.193 EAL: No free 2048 kB hugepages reported on node 1 00:10:58.193 [2024-07-25 23:16:55.848997] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:58.193 [2024-07-25 23:16:55.880783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:58.453 [2024-07-25 23:16:55.974535] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:58.453 [2024-07-25 23:16:55.974599] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:58.453 [2024-07-25 23:16:55.974625] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:58.453 [2024-07-25 23:16:55.974639] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:58.453 [2024-07-25 23:16:55.974651] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:58.453 [2024-07-25 23:16:55.974735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.453 [2024-07-25 23:16:55.974790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.453 [2024-07-25 23:16:55.974844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:58.453 [2024-07-25 23:16:55.974847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.453 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:58.453 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:10:58.453 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:58.453 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:58.453 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:58.453 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:58.453 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:58.453 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.453 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:58.454 [2024-07-25 23:16:56.140736] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:58.454 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.454 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:58.454 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.454 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 
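The target was started inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, pid 1323592) and is now provisioned over its RPC socket; rpc_cmd in this trace wraps scripts/rpc.py against /var/tmp/spdk.sock. The transport and bdev calls above, plus the subsystem calls that scroll by next, reduce to roughly this standalone sequence (a sketch; here -u sets the transport I/O unit size in bytes and -c the in-capsule data size):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
./scripts/rpc.py bdev_malloc_create 64 512                    # 64 MiB RAM-backed bdev, 512 B blocks -> "Malloc0"
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With num_iterations=100 and NVME_CONNECT='nvme connect -i 8' (-i requests 8 I/O queue pairs), the test then connects to and disconnects from that listener 100 times; each "NQN:... disconnected 1 controller(s)" line below is nvme-cli's disconnect output for one pass.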
00:10:58.713 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.713 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:58.713 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:58.713 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.713 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:58.713 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.713 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:58.713 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.713 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:58.714 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.714 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:58.714 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.714 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:58.714 [2024-07-25 23:16:56.202511] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:58.714 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.714 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:10:58.714 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:10:58.714 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:10:58.714 23:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:01.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.082 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) [... the same "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" line repeats once per pass for the remaining connect/disconnect iterations, timestamps 00:11:33.643 through 00:14:49.973, 100 iterations in total, all successful ...] 00:14:49.973 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:49.973 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:49.973 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:49.973 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:14:49.973 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:49.973 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:14:49.973 23:20:47
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:49.973 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:49.973 rmmod nvme_tcp 00:14:49.973 rmmod nvme_fabrics 00:14:49.973 rmmod nvme_keyring 00:14:49.973 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:49.973 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:14:49.973 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:14:49.973 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1323592 ']' 00:14:49.973 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1323592 00:14:49.973 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1323592 ']' 00:14:49.973 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1323592 00:14:49.973 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:14:49.973 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:49.973 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1323592 00:14:49.973 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:49.973 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:49.973 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1323592' 00:14:49.973 killing process with pid 1323592 00:14:49.973 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1323592 00:14:49.973 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1323592 00:14:50.232 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:50.232 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:50.232 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:50.232 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:50.232 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:50.232 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.232 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.232 23:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.768 23:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:52.768 00:14:52.768 real 3m56.410s 00:14:52.768 user 14m59.979s 00:14:52.768 sys 0m34.918s 00:14:52.768 23:20:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:52.768 23:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:52.768 ************************************ 00:14:52.768 END TEST nvmf_connect_disconnect 00:14:52.768 ************************************ 00:14:52.768 23:20:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:52.768 23:20:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:52.768 23:20:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:52.768 23:20:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:52.768 ************************************ 00:14:52.768 START TEST nvmf_multitarget 00:14:52.768 ************************************ 00:14:52.768 23:20:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:52.768 * Looking for test storage... 00:14:52.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:52.768 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:52.768 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:52.768 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.768 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.768 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.768 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.768 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.768 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.768 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.768 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.768 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.768 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.768 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:52.768 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:52.768 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.768 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.768 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@47 -- # : 0 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:14:52.769 23:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:54.668 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:54.668 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:14:54.668 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:54.668 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:54.668 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:54.668 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:54.668 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:54.668 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # 
net_devs=() 00:14:54.668 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:54.668 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:14:54.668 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:14:54.668 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:14:54.668 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:14:54.668 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:14:54.668 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:54.669 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:54.669 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:54.669 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.669 23:20:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:54.669 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:54.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:54.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:14:54.669 00:14:54.669 --- 10.0.0.2 ping statistics --- 00:14:54.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.669 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:54.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:54.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:14:54.669 00:14:54.669 --- 10.0.0.1 ping statistics --- 00:14:54.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.669 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1354593 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1354593 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1354593 ']' 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
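The nvmf_multitarget test that follows checks that a single nvmf_tgt process (pid 1354593 here) can host several independent NVMe-oF targets. Stripped of the xtrace noise, the traced steps amount to this sequence (multitarget_rpc.py path as in this workspace; -s 32 here sets the new target's maximum subsystem count; expected values taken from the jq checks in this run):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
$RPC nvmf_get_targets | jq length             # 1: only the default target exists
$RPC nvmf_create_target -n nvmf_tgt_1 -s 32   # prints "nvmf_tgt_1"
$RPC nvmf_create_target -n nvmf_tgt_2 -s 32   # prints "nvmf_tgt_2"
$RPC nvmf_get_targets | jq length             # 3: default plus the two new targets
$RPC nvmf_delete_target -n nvmf_tgt_1         # true
$RPC nvmf_delete_target -n nvmf_tgt_2         # true
$RPC nvmf_get_targets | jq length             # back to 1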
00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:54.669 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:54.669 [2024-07-25 23:20:52.228228] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:14:54.669 [2024-07-25 23:20:52.228307] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.669 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.669 [2024-07-25 23:20:52.268177] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:54.669 [2024-07-25 23:20:52.300495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:54.927 [2024-07-25 23:20:52.394775] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.927 [2024-07-25 23:20:52.394829] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.927 [2024-07-25 23:20:52.394845] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.927 [2024-07-25 23:20:52.394858] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.928 [2024-07-25 23:20:52.394870] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.928 [2024-07-25 23:20:52.394948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.928 [2024-07-25 23:20:52.395005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.928 [2024-07-25 23:20:52.395076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:54.928 [2024-07-25 23:20:52.395080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.928 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:54.928 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:14:54.928 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:54.928 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:54.928 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:54.928 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.928 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:54.928 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:54.928 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:55.185 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:55.185 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:55.185 "nvmf_tgt_1" 00:14:55.185 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:55.185 "nvmf_tgt_2" 00:14:55.185 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:55.185 23:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:55.441 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:55.441 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:55.441 true 00:14:55.441 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:55.698 true 00:14:55.698 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:55.698 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:55.698 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:55.698 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:55.698 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:55.698 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:55.698 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:14:55.698 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:55.698 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:14:55.698 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:55.698 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:55.698 rmmod nvme_tcp 00:14:55.698 rmmod nvme_fabrics 00:14:55.698 rmmod nvme_keyring 00:14:55.965 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:55.965 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:14:55.965 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:14:55.965 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1354593 ']' 00:14:55.965 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1354593 00:14:55.965 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1354593 ']' 00:14:55.965 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1354593 00:14:55.965 23:20:53 
00:14:55.698 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:14:55.698 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini
00:14:55.698 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:55.698 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync
00:14:55.698 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:55.698 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e
00:14:55.698 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:55.698 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:55.698 rmmod nvme_tcp
00:14:55.965 rmmod nvme_fabrics
00:14:55.965 rmmod nvme_keyring
00:14:55.965 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:55.965 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e
00:14:55.965 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0
00:14:55.965 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1354593 ']'
00:14:55.965 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1354593
00:14:55.965 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1354593 ']'
00:14:55.965 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1354593
00:14:55.965 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname
00:14:55.965 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:14:55.965 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1354593
00:14:55.965 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:14:55.965 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:14:55.965 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1354593'
00:14:55.965 killing process with pid 1354593
00:14:55.965 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1354593
00:14:55.965 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1354593
00:14:56.227 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:14:56.227 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:14:56.227 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:14:56.227 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:56.227 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns
00:14:56.227 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:56.227 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:56.227 23:20:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:58.132 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:58.132
00:14:58.132 real 0m5.748s
00:14:58.132 user 0m6.534s
00:14:58.132 sys 0m1.963s
00:14:58.132 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:58.132 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:14:58.132 ************************************
00:14:58.132 END TEST nvmf_multitarget
00:14:58.132 ************************************
00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp
00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:58.133 ************************************
00:14:58.133 START TEST nvmf_rpc
00:14:58.133 ************************************
00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp
00:14:58.133 * Looking for test storage...
00:14:58.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:58.133 23:20:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:14:58.133 23:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:00.666 23:20:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:00.666 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:00.666 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:00.666 
23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:15:00.666 Found net devices under 0000:0a:00.0: cvl_0_0
00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]]
00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:15:00.666 Found net devices under 0000:0a:00.1: cvl_0_1
00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes
00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:15:00.666 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:15:00.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:00.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms
00:15:00.667
00:15:00.667 --- 10.0.0.2 ping statistics ---
00:15:00.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:00.667 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:15:00.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:00.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms
00:15:00.667
00:15:00.667 --- 10.0.0.1 ping statistics ---
00:15:00.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:00.667 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:15:00.667 23:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:15:00.667 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:15:00.667 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:15:00.667 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable
00:15:00.667 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:00.667 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1356694
00:15:00.667 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:15:00.667 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1356694
00:15:00.667 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1356694 ']'
00:15:00.667 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:00.667 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:15:00.667 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:00.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:00.667 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:15:00.667 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:00.667 [2024-07-25 23:20:58.064725] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:15:00.667 [2024-07-25 23:20:58.064793] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:00.667 EAL: No free 2048 kB hugepages reported on node 1
00:15:00.667 [2024-07-25 23:20:58.101140] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:15:00.667 [2024-07-25 23:20:58.133781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:15:00.667 [2024-07-25 23:20:58.232422] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:00.667 [2024-07-25 23:20:58.232498] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:00.667 [2024-07-25 23:20:58.232514] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:15:00.667 [2024-07-25 23:20:58.232527] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:15:00.667 [2024-07-25 23:20:58.232539] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
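Stripped of the xtrace noise, the plumbing nvmftestinit performed above is: move one port of the two-port NIC into a private namespace, address both ends, open TCP/4420, verify reachability, and launch nvmf_tgt inside the namespace. A condensed sketch with this run's interface names (the nvmf_tgt flags are the ones logged above):

# cvl_0_0 (target side) goes into the namespace; cvl_0_1 (initiator side) stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                              # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # namespace -> root ns
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Because the RPC channel is a Unix domain socket (/var/tmp/spdk.sock), it stays reachable from the root namespace even though the target's data path lives inside cvl_0_0_ns_spdk.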
00:15:00.667 [2024-07-25 23:20:58.232611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:15:00.667 [2024-07-25 23:20:58.232648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:15:00.667 [2024-07-25 23:20:58.232698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:15:00.667 [2024-07-25 23:20:58.232700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:15:00.667 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:15:00.667 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0
00:15:00.667 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:15:00.667 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable
00:15:00.667 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats
00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{
00:15:00.928 "tick_rate": 2700000000,
00:15:00.928 "poll_groups": [
00:15:00.928 {
00:15:00.928 "name": "nvmf_tgt_poll_group_000",
00:15:00.928 "admin_qpairs": 0,
00:15:00.928 "io_qpairs": 0,
00:15:00.928 "current_admin_qpairs": 0,
00:15:00.928 "current_io_qpairs": 0,
00:15:00.928 "pending_bdev_io": 0,
00:15:00.928 "completed_nvme_io": 0,
00:15:00.928 "transports": []
00:15:00.928 },
00:15:00.928 {
00:15:00.928 "name": "nvmf_tgt_poll_group_001",
00:15:00.928 "admin_qpairs": 0,
00:15:00.928 "io_qpairs": 0,
00:15:00.928 "current_admin_qpairs": 0,
00:15:00.928 "current_io_qpairs": 0,
00:15:00.928 "pending_bdev_io": 0,
00:15:00.928 "completed_nvme_io": 0,
00:15:00.928 "transports": []
00:15:00.928 },
00:15:00.928 {
00:15:00.928 "name": "nvmf_tgt_poll_group_002",
00:15:00.928 "admin_qpairs": 0,
00:15:00.928 "io_qpairs": 0,
00:15:00.928 "current_admin_qpairs": 0,
00:15:00.928 "current_io_qpairs": 0,
00:15:00.928 "pending_bdev_io": 0,
00:15:00.928 "completed_nvme_io": 0,
00:15:00.928 "transports": []
00:15:00.928 },
00:15:00.928 {
00:15:00.928 "name": "nvmf_tgt_poll_group_003",
00:15:00.928 "admin_qpairs": 0,
00:15:00.928 "io_qpairs": 0,
00:15:00.928 "current_admin_qpairs": 0,
00:15:00.928 "current_io_qpairs": 0,
00:15:00.928 "pending_bdev_io": 0,
00:15:00.928 "completed_nvme_io": 0,
00:15:00.928 "transports": []
00:15:00.928 }
00:15:00.928 ]
00:15:00.928 }'
00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name'
00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name'
00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name'
00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l
00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 ))
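The jcount check above is just nvmf_get_stats piped through jq. A sketch of the same probe with the generic rpc.py client (using scripts/rpc.py on the default /var/tmp/spdk.sock is an assumption here; the test wraps the call in its rpc_cmd helper):

# One poll group per reactor core in the 0xF mask; before nvmf_create_transport the
# transport list of each group is empty, so jq reports null for the first element.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
test "$($RPC nvmf_get_stats | jq '.poll_groups[].name' | wc -l)" -eq 4
$RPC nvmf_get_stats | jq '.poll_groups[0].transports[0]'    # -> null until a transport exists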
00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.928 [2024-07-25 23:20:58.489888] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:00.928 "tick_rate": 2700000000, 00:15:00.928 "poll_groups": [ 00:15:00.928 { 00:15:00.928 "name": "nvmf_tgt_poll_group_000", 00:15:00.928 "admin_qpairs": 0, 00:15:00.928 "io_qpairs": 0, 00:15:00.928 "current_admin_qpairs": 0, 00:15:00.928 "current_io_qpairs": 0, 00:15:00.928 "pending_bdev_io": 0, 00:15:00.928 "completed_nvme_io": 0, 00:15:00.928 "transports": [ 00:15:00.928 { 00:15:00.928 "trtype": "TCP" 00:15:00.928 } 00:15:00.928 ] 00:15:00.928 }, 00:15:00.928 { 00:15:00.928 "name": "nvmf_tgt_poll_group_001", 00:15:00.928 "admin_qpairs": 0, 00:15:00.928 "io_qpairs": 0, 00:15:00.928 "current_admin_qpairs": 0, 00:15:00.928 "current_io_qpairs": 0, 00:15:00.928 "pending_bdev_io": 0, 00:15:00.928 "completed_nvme_io": 0, 00:15:00.928 "transports": [ 00:15:00.928 { 00:15:00.928 "trtype": "TCP" 00:15:00.928 } 00:15:00.928 ] 00:15:00.928 }, 00:15:00.928 { 00:15:00.928 "name": "nvmf_tgt_poll_group_002", 00:15:00.928 "admin_qpairs": 0, 00:15:00.928 "io_qpairs": 0, 00:15:00.928 "current_admin_qpairs": 0, 00:15:00.928 "current_io_qpairs": 0, 00:15:00.928 "pending_bdev_io": 0, 00:15:00.928 "completed_nvme_io": 0, 00:15:00.928 "transports": [ 00:15:00.928 { 00:15:00.928 "trtype": "TCP" 00:15:00.928 } 00:15:00.928 ] 00:15:00.928 }, 00:15:00.928 { 00:15:00.928 "name": "nvmf_tgt_poll_group_003", 00:15:00.928 "admin_qpairs": 0, 00:15:00.928 "io_qpairs": 0, 00:15:00.928 "current_admin_qpairs": 0, 00:15:00.928 "current_io_qpairs": 0, 00:15:00.928 "pending_bdev_io": 0, 00:15:00.928 "completed_nvme_io": 0, 00:15:00.928 "transports": [ 00:15:00.928 { 00:15:00.928 "trtype": "TCP" 00:15:00.928 } 00:15:00.928 ] 00:15:00.928 } 00:15:00.928 ] 00:15:00.928 }' 00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:00.928 23:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:00.928 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:00.929 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:00.929 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:00.929 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:00.929 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:00.929 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:00.929 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.929 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.929 Malloc1 00:15:00.929 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.929 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:00.929 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.929 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.929 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.929 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:00.929 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.929 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.929 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.929 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:00.929 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.929 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.929 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.929 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:00.929 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.929 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.929 [2024-07-25 23:20:58.651144] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.188 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.188 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:01.188 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:01.188 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:01.188 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:01.188 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:01.188 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:01.188 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:01.188 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:01.188 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:01.188 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:01.188 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:01.188 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:01.188 [2024-07-25 23:20:58.673506] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:15:01.188 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:01.188 could not add new controller: failed to write to nvme-fabrics device 00:15:01.188 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:01.188 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:01.188 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:01.188 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:01.188 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:01.188 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.188 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:01.188 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.188 23:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:01.756 23:20:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:01.756 23:20:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:01.756 23:20:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:01.756 23:20:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:01.756 23:20:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:03.657 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:03.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme
00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme
00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme
00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme
00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]]
00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:15:03.915 [2024-07-25 23:21:01.485092] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55'
00:15:03.915 Failed to write to /dev/nvme-fabrics: Input/output error
00:15:03.915 could not add new controller: failed to write to nvme-fabrics device
00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1
00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:03.915 23:21:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:15:04.480 23:21:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:15:04.480 23:21:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:15:04.480 23:21:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:15:04.480 23:21:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:15:04.480 23:21:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
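The connect attempts above demonstrate the subsystem host allowlist: with allow_any_host disabled, nvmf_qpair_access_allowed rejects an unlisted host NQN and the initiator sees an I/O error on /dev/nvme-fabrics; allowlisting the host (or re-enabling allow_any_host, as rpc.sh does at line 72) lets the identical command succeed. A sketch of the pattern, assuming the generic rpc.py client in place of the test's rpc_cmd wrapper:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
# 1) Unlisted host NQN: the target rejects the connect (Input/output error, as logged above).
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 || true
# 2a) Either allowlist exactly this host...
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 $HOSTNQN
# 2b) ...or reopen the subsystem to any host.
$RPC nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
# 3) The same connect now succeeds; disconnect when finished.
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme disconnect -n nqn.2016-06.io.spdk:cnode1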
00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:07.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.065 [2024-07-25 23:21:04.298078] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.065 
23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.065 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:07.324 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:07.324 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:07.324 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:07.324 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:07.324 23:21:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:09.222 23:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:09.222 23:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:09.222 23:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:09.222 23:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:09.222 23:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:09.222 23:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:09.222 23:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:09.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.480 23:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:09.480 23:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:09.480 23:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:09.480 23:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:09.480 23:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:09.480 23:21:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:09.480 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
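From rpc.sh line 81 onward the test repeats the same create/connect/teardown cycle five times; the iteration just completed above, reduced to its commands (a sketch, again substituting rpc.py for rpc_cmd, with waitforserial approximated by an lsblk poll as in the helper's own implementation):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in 1 2 3 4 5; do
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    $RPC nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done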
00:15:09.480 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:09.480 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.480 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.480 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.480 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:09.480 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.480 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.480 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.480 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:09.480 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:09.480 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.480 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.480 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.480 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.480 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.480 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.480 [2024-07-25 23:21:07.030639] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.480 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.480 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:09.480 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.480 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.480 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.480 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:09.481 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.481 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.481 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.481 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:10.046 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:10.046 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:15:10.046 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:10.046 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:10.046 23:21:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:11.943 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:11.943 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:11.943 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:11.943 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:11.943 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:11.943 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:11.943 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:12.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.201 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:12.201 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:12.201 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:12.201 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:12.201 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:12.201 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:12.201 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:12.201 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:12.201 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.201 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.201 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.202 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:12.202 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.202 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.202 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.202 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:12.202 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:12.202 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.202 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:12.202 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.202 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:12.202 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.202 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.202 [2024-07-25 23:21:09.753330] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.202 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.202 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:12.202 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.202 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.202 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.202 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:12.202 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.202 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.202 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.202 23:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:12.768 23:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:12.768 23:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:12.768 23:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:12.768 23:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:12.768 23:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:15.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.295 23:21:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.295 [2024-07-25 23:21:12.561111] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.295 23:21:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:15.553 23:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:15.553 23:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:15.553 23:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:15.553 23:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:15.553 23:21:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:18.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.079 [2024-07-25 23:21:15.288904] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.079 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:18.337 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:18.337 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:18.337 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:18.337 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:18.337 23:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:20.233 23:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:20.233 23:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:20.233 23:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:20.233 23:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:20.233 23:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:20.233 23:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:20.233 23:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:20.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.491 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:20.491 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:20.491 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:20.491 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:20.491 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:20.491 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:20.491 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:20.491 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:20.491 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.491 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.491 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.492 23:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.492 [2024-07-25 23:21:18.066752] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.492 [2024-07-25 23:21:18.114807] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.492 [2024-07-25 23:21:18.162972] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.492 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:20.493 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.493 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.493 [2024-07-25 23:21:18.211150] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.493 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.493 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:20.493 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.493 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.750 [2024-07-25 23:21:18.259315] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.750 23:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:20.750 "tick_rate": 2700000000, 00:15:20.750 "poll_groups": [ 00:15:20.750 { 00:15:20.750 "name": "nvmf_tgt_poll_group_000", 00:15:20.750 "admin_qpairs": 2, 00:15:20.750 "io_qpairs": 84, 00:15:20.750 "current_admin_qpairs": 0, 00:15:20.750 "current_io_qpairs": 0, 00:15:20.750 "pending_bdev_io": 0, 00:15:20.750 "completed_nvme_io": 187, 00:15:20.750 "transports": [ 00:15:20.750 { 00:15:20.750 "trtype": "TCP" 00:15:20.750 } 00:15:20.750 ] 00:15:20.750 }, 00:15:20.750 { 00:15:20.750 "name": "nvmf_tgt_poll_group_001", 00:15:20.750 "admin_qpairs": 2, 00:15:20.750 "io_qpairs": 84, 00:15:20.750 "current_admin_qpairs": 0, 00:15:20.750 "current_io_qpairs": 0, 00:15:20.750 "pending_bdev_io": 0, 00:15:20.750 "completed_nvme_io": 183, 00:15:20.750 "transports": [ 00:15:20.750 { 00:15:20.750 "trtype": "TCP" 00:15:20.750 } 00:15:20.750 ] 00:15:20.750 }, 00:15:20.750 { 00:15:20.750 "name": "nvmf_tgt_poll_group_002", 00:15:20.750 "admin_qpairs": 1, 00:15:20.750 "io_qpairs": 84, 00:15:20.750 "current_admin_qpairs": 0, 00:15:20.750 "current_io_qpairs": 0, 00:15:20.750 "pending_bdev_io": 0, 00:15:20.750 "completed_nvme_io": 229, 00:15:20.750 "transports": [ 00:15:20.750 { 00:15:20.750 "trtype": "TCP" 00:15:20.750 } 00:15:20.750 ] 00:15:20.750 }, 00:15:20.750 { 00:15:20.750 "name": "nvmf_tgt_poll_group_003", 00:15:20.750 "admin_qpairs": 2, 00:15:20.750 "io_qpairs": 84, 00:15:20.750 "current_admin_qpairs": 0, 00:15:20.750 "current_io_qpairs": 0, 00:15:20.750 "pending_bdev_io": 0, 00:15:20.750 "completed_nvme_io": 87, 00:15:20.750 "transports": [ 00:15:20.750 { 00:15:20.750 "trtype": "TCP" 00:15:20.750 } 00:15:20.750 ] 00:15:20.750 } 00:15:20.750 ] 00:15:20.750 }' 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:20.750 rmmod nvme_tcp 00:15:20.750 rmmod nvme_fabrics 00:15:20.750 rmmod nvme_keyring 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1356694 ']' 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1356694 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1356694 ']' 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1356694 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:20.750 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1356694 00:15:21.010 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:21.010 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:21.010 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1356694' 00:15:21.010 killing process with pid 1356694 00:15:21.010 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1356694 00:15:21.010 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 1356694 00:15:21.010 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:21.010 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:21.010 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:21.010 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:21.010 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:21.010 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.010 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:21.010 23:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:23.542 00:15:23.542 real 0m24.979s 00:15:23.542 user 1m21.110s 00:15:23.542 sys 0m4.073s 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:23.542 ************************************ 00:15:23.542 END TEST nvmf_rpc 00:15:23.542 ************************************ 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:23.542 ************************************ 00:15:23.542 START TEST nvmf_invalid 00:15:23.542 ************************************ 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:23.542 * Looking for test storage... 00:15:23.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:23.542 23:21:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:23.542 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:23.543 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:23.543 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:23.543 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:23.543 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:23.543 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:23.543 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:23.543 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:23.543 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:23.543 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:23.543 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:23.543 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.543 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:23.543 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.543 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:23.543 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:23.543 23:21:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:15:23.543 23:21:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:25.442 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:25.442 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:25.442 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.442 23:21:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:25.442 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:25.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:25.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:15:25.442 00:15:25.442 --- 10.0.0.2 ping statistics --- 00:15:25.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.442 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:25.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:25.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:15:25.442 00:15:25.442 --- 10.0.0.1 ping statistics --- 00:15:25.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.442 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:25.442 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:25.443 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:25.443 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:25.443 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:25.443 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1361792 00:15:25.443 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:25.443 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1361792 00:15:25.443 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1361792 ']' 00:15:25.443 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.443 23:21:22 
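
The nvmf_tcp_init sequence traced above builds the two-port loopback topology this suite runs on: one E810 port (cvl_0_0) is moved into a private network namespace and addressed as the target side (10.0.0.2), its sibling port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens the NVMe/TCP listen port, and a ping in each direction verifies reachability before any test runs. A minimal sketch of the same setup, with interface names and addresses taken from the log:

    # sketch only: assumes two back-to-back ports named as in this run
    ip netns add cvl_0_0_ns_spdk                       # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # default NVMe/TCP port
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns
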
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:25.443 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.443 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:25.443 23:21:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:25.443 [2024-07-25 23:21:22.982869] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:15:25.443 [2024-07-25 23:21:22.982950] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.443 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.443 [2024-07-25 23:21:23.020912] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:25.443 [2024-07-25 23:21:23.052944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:25.443 [2024-07-25 23:21:23.149470] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.443 [2024-07-25 23:21:23.149535] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.443 [2024-07-25 23:21:23.149552] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.443 [2024-07-25 23:21:23.149565] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.443 [2024-07-25 23:21:23.149576] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
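
nvmfappstart launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the application answers on its JSON-RPC UNIX socket; every negative test that follows drives that socket through scripts/rpc.py, captures the error reply, and glob-matches the expected message. A rough sketch of both halves (the retry loop is condensed from the max_retries=100 logic above, and using rpc_get_methods as the liveness probe is an assumption, not something shown in this log):

    # start the target in the namespace and wait for its RPC socket (sketch)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do                  # max_retries=100, as in the log
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done

    # the negative-test pattern used for every case below (sketch)
    out=$(scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode4868 2>&1) || true
    [[ $out == *"Unable to find target"* ]]          # pass iff the expected error came back
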
00:15:25.443 [2024-07-25 23:21:23.151084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.443 [2024-07-25 23:21:23.151133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.443 [2024-07-25 23:21:23.151182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:25.443 [2024-07-25 23:21:23.151185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.700 23:21:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:25.700 23:21:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:15:25.700 23:21:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:25.700 23:21:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:25.700 23:21:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:25.700 23:21:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.700 23:21:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:25.700 23:21:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode4868 00:15:25.957 [2024-07-25 23:21:23.517081] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:25.957 23:21:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:15:25.957 { 00:15:25.957 "nqn": "nqn.2016-06.io.spdk:cnode4868", 00:15:25.957 "tgt_name": "foobar", 00:15:25.957 "method": "nvmf_create_subsystem", 00:15:25.957 "req_id": 1 00:15:25.957 } 00:15:25.958 Got JSON-RPC error response 00:15:25.958 response: 00:15:25.958 { 00:15:25.958 "code": -32603, 00:15:25.958 "message": "Unable to find target foobar" 00:15:25.958 }' 00:15:25.958 23:21:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:25.958 { 00:15:25.958 "nqn": "nqn.2016-06.io.spdk:cnode4868", 00:15:25.958 "tgt_name": "foobar", 00:15:25.958 "method": "nvmf_create_subsystem", 00:15:25.958 "req_id": 1 00:15:25.958 } 00:15:25.958 Got JSON-RPC error response 00:15:25.958 response: 00:15:25.958 { 00:15:25.958 "code": -32603, 00:15:25.958 "message": "Unable to find target foobar" 00:15:25.958 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:25.958 23:21:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:25.958 23:21:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode20638 00:15:26.214 [2024-07-25 23:21:23.769952] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20638: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:26.214 23:21:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:26.214 { 00:15:26.214 "nqn": "nqn.2016-06.io.spdk:cnode20638", 00:15:26.214 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:26.214 "method": "nvmf_create_subsystem", 00:15:26.214 "req_id": 1 00:15:26.214 } 00:15:26.214 Got JSON-RPC error 
response 00:15:26.214 response: 00:15:26.214 { 00:15:26.214 "code": -32602, 00:15:26.214 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:26.214 }' 00:15:26.214 23:21:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:26.214 { 00:15:26.214 "nqn": "nqn.2016-06.io.spdk:cnode20638", 00:15:26.214 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:26.214 "method": "nvmf_create_subsystem", 00:15:26.214 "req_id": 1 00:15:26.214 } 00:15:26.214 Got JSON-RPC error response 00:15:26.214 response: 00:15:26.214 { 00:15:26.214 "code": -32602, 00:15:26.214 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:26.214 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:26.214 23:21:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:26.214 23:21:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode12658 00:15:26.470 [2024-07-25 23:21:24.010743] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12658: invalid model number 'SPDK_Controller' 00:15:26.470 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:26.470 { 00:15:26.470 "nqn": "nqn.2016-06.io.spdk:cnode12658", 00:15:26.470 "model_number": "SPDK_Controller\u001f", 00:15:26.470 "method": "nvmf_create_subsystem", 00:15:26.470 "req_id": 1 00:15:26.470 } 00:15:26.470 Got JSON-RPC error response 00:15:26.470 response: 00:15:26.470 { 00:15:26.470 "code": -32602, 00:15:26.470 "message": "Invalid MN SPDK_Controller\u001f" 00:15:26.470 }' 00:15:26.470 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:26.470 { 00:15:26.470 "nqn": "nqn.2016-06.io.spdk:cnode12658", 00:15:26.470 "model_number": "SPDK_Controller\u001f", 00:15:26.470 "method": "nvmf_create_subsystem", 00:15:26.470 "req_id": 1 00:15:26.470 } 00:15:26.470 Got JSON-RPC error response 00:15:26.470 response: 00:15:26.470 { 00:15:26.470 "code": -32602, 00:15:26.470 "message": "Invalid MN SPDK_Controller\u001f" 00:15:26.470 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:26.470 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:26.470 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:15:26.470 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:26.470 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:26.470 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:26.470 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:26.470 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.470 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 85 00:15:26.470 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:15:26.470 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:15:26.470 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.470 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.470 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:15:26.470 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:15:26.470 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:15:26.470 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:15:26.471 23:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ U == \- ]] 00:15:26.471 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'UFOQ),&JlpM,Qyg*,>rV.' 00:15:26.472 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'UFOQ),&JlpM,Qyg*,>rV.' nqn.2016-06.io.spdk:cnode6789 00:15:26.729 [2024-07-25 23:21:24.351891] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6789: invalid serial number 'UFOQ),&JlpM,Qyg*,>rV.' 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:15:26.729 { 00:15:26.729 "nqn": "nqn.2016-06.io.spdk:cnode6789", 00:15:26.729 "serial_number": "UFOQ),&JlpM,Qyg*,>rV.", 00:15:26.729 "method": "nvmf_create_subsystem", 00:15:26.729 "req_id": 1 00:15:26.729 } 00:15:26.729 Got JSON-RPC error response 00:15:26.729 response: 00:15:26.729 { 00:15:26.729 "code": -32602, 00:15:26.729 "message": "Invalid SN UFOQ),&JlpM,Qyg*,>rV." 00:15:26.729 }' 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:15:26.729 { 00:15:26.729 "nqn": "nqn.2016-06.io.spdk:cnode6789", 00:15:26.729 "serial_number": "UFOQ),&JlpM,Qyg*,>rV.", 00:15:26.729 "method": "nvmf_create_subsystem", 00:15:26.729 "req_id": 1 00:15:26.729 } 00:15:26.729 Got JSON-RPC error response 00:15:26.729 response: 00:15:26.729 { 00:15:26.729 "code": -32602, 00:15:26.729 "message": "Invalid SN UFOQ),&JlpM,Qyg*,>rV." 
00:15:26.729 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
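
The long printf/echo runs above and below are gen_random_s at work: target/invalid.sh builds each throwaway serial and model number one character at a time, picking random codes from an ASCII table that runs from 32 (space) up to 127 (DEL), converting each code to hex with printf %x and materializing the byte with echo -e. The DEL at the top of the range is what produces the \u007f byte inside the model number the target rejects further down. A condensed sketch of the same generator (the real script indexes a pre-built chars=() table; this picks codes directly):

    # sketch of the generator traced above
    gen_random_s() {
        local length=$1 ll code string=
        for ((ll = 0; ll < length; ll++)); do
            code=$((RANDOM % 96 + 32))               # printable ASCII plus DEL (32-127)
            string+=$(echo -e "\\x$(printf %x "$code")")
        done
        echo "$string"
    }
    gen_random_s 21    # e.g. a 21-character serial-number candidate
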
00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x3a' 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.729 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 47 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.730 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.987 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:15:26.987 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:26.987 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:15:26.987 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.987 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.987 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:15:26.987 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:26.987 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:15:26.987 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.987 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll < length )) 00:15:26.987 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:15:26.987 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:15:26.987 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:15:26.987 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.987 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.987 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:15:26.987 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:26.987 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:15:26.987 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.987 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.987 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:15:26.987 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:15:26.987 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 
-- # (( ll++ )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=. 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ l == \- ]] 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'lG?]TH->j:H>i1na/om>iO\o({VmI"|C"x~.qA,B' 00:15:26.988 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'lG?]TH->j:H>i1na/om>iO\o({VmI"|C"x~.qA,B' nqn.2016-06.io.spdk:cnode8192 00:15:27.246 [2024-07-25 23:21:24.761279] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8192: invalid model number 'lG?]TH->j:H>i1na/om>iO\o({VmI"|C"x~.qA,B' 00:15:27.246 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:15:27.246 { 00:15:27.246 "nqn": "nqn.2016-06.io.spdk:cnode8192", 00:15:27.246 "model_number": "lG?]TH->j:H>i1na/o\u007fm>iO\\o({VmI\"|C\"x~.qA,B", 00:15:27.246 "method": "nvmf_create_subsystem", 00:15:27.246 "req_id": 1 00:15:27.246 } 00:15:27.246 Got JSON-RPC error response 00:15:27.246 response: 00:15:27.246 { 00:15:27.246 "code": -32602, 00:15:27.246 "message": "Invalid MN 
lG?]TH->j:H>i1na/o\u007fm>iO\\o({VmI\"|C\"x~.qA,B" 00:15:27.246 }' 00:15:27.246 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:15:27.246 { 00:15:27.246 "nqn": "nqn.2016-06.io.spdk:cnode8192", 00:15:27.246 "model_number": "lG?]TH->j:H>i1na/o\u007fm>iO\\o({VmI\"|C\"x~.qA,B", 00:15:27.246 "method": "nvmf_create_subsystem", 00:15:27.246 "req_id": 1 00:15:27.246 } 00:15:27.246 Got JSON-RPC error response 00:15:27.246 response: 00:15:27.246 { 00:15:27.246 "code": -32602, 00:15:27.246 "message": "Invalid MN lG?]TH->j:H>i1na/o\u007fm>iO\\o({VmI\"|C\"x~.qA,B" 00:15:27.246 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:27.246 23:21:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:15:27.503 [2024-07-25 23:21:25.006182] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:27.503 23:21:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:27.776 23:21:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:15:27.776 23:21:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:15:27.776 23:21:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:15:27.776 23:21:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:15:27.776 23:21:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:15:28.056 [2024-07-25 23:21:25.515841] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:28.056 23:21:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:15:28.056 { 00:15:28.056 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:28.056 "listen_address": { 00:15:28.056 "trtype": "tcp", 00:15:28.056 "traddr": "", 00:15:28.056 "trsvcid": "4421" 00:15:28.056 }, 00:15:28.056 "method": "nvmf_subsystem_remove_listener", 00:15:28.056 "req_id": 1 00:15:28.056 } 00:15:28.056 Got JSON-RPC error response 00:15:28.056 response: 00:15:28.056 { 00:15:28.056 "code": -32602, 00:15:28.056 "message": "Invalid parameters" 00:15:28.056 }' 00:15:28.056 23:21:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:15:28.056 { 00:15:28.056 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:28.056 "listen_address": { 00:15:28.056 "trtype": "tcp", 00:15:28.056 "traddr": "", 00:15:28.056 "trsvcid": "4421" 00:15:28.056 }, 00:15:28.056 "method": "nvmf_subsystem_remove_listener", 00:15:28.056 "req_id": 1 00:15:28.056 } 00:15:28.056 Got JSON-RPC error response 00:15:28.056 response: 00:15:28.056 { 00:15:28.056 "code": -32602, 00:15:28.056 "message": "Invalid parameters" 00:15:28.056 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:28.056 23:21:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32762 -i 0 00:15:28.056 [2024-07-25 23:21:25.764628] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32762: invalid cntlid range [0-65519] 00:15:28.314 23:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:15:28.314 { 00:15:28.314 "nqn": "nqn.2016-06.io.spdk:cnode32762", 00:15:28.314 "min_cntlid": 0, 00:15:28.314 "method": "nvmf_create_subsystem", 00:15:28.314 "req_id": 1 00:15:28.314 } 00:15:28.314 Got JSON-RPC error response 00:15:28.314 response: 00:15:28.314 { 00:15:28.314 "code": -32602, 00:15:28.314 "message": "Invalid cntlid range [0-65519]" 00:15:28.314 }' 00:15:28.314 23:21:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:15:28.314 { 00:15:28.314 "nqn": "nqn.2016-06.io.spdk:cnode32762", 00:15:28.314 "min_cntlid": 0, 00:15:28.314 "method": "nvmf_create_subsystem", 00:15:28.314 "req_id": 1 00:15:28.314 } 00:15:28.314 Got JSON-RPC error response 00:15:28.314 response: 00:15:28.314 { 00:15:28.314 "code": -32602, 00:15:28.314 "message": "Invalid cntlid range [0-65519]" 00:15:28.314 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:28.314 23:21:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30542 -i 65520 00:15:28.314 [2024-07-25 23:21:26.017465] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30542: invalid cntlid range [65520-65519] 00:15:28.314 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:15:28.314 { 00:15:28.314 "nqn": "nqn.2016-06.io.spdk:cnode30542", 00:15:28.314 "min_cntlid": 65520, 00:15:28.314 "method": "nvmf_create_subsystem", 00:15:28.314 "req_id": 1 00:15:28.314 } 00:15:28.314 Got JSON-RPC error response 00:15:28.314 response: 00:15:28.314 { 00:15:28.314 "code": -32602, 00:15:28.314 "message": "Invalid cntlid range [65520-65519]" 00:15:28.314 }' 00:15:28.314 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:15:28.314 { 00:15:28.314 "nqn": "nqn.2016-06.io.spdk:cnode30542", 00:15:28.314 "min_cntlid": 65520, 00:15:28.314 "method": "nvmf_create_subsystem", 00:15:28.314 "req_id": 1 00:15:28.314 } 00:15:28.314 Got JSON-RPC error response 00:15:28.314 response: 00:15:28.314 { 00:15:28.314 "code": -32602, 00:15:28.314 "message": "Invalid cntlid range [65520-65519]" 00:15:28.314 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:28.571 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32385 -I 0 00:15:28.571 [2024-07-25 23:21:26.266288] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32385: invalid cntlid range [1-0] 00:15:28.571 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:15:28.571 { 00:15:28.571 "nqn": "nqn.2016-06.io.spdk:cnode32385", 00:15:28.571 "max_cntlid": 0, 00:15:28.571 "method": "nvmf_create_subsystem", 00:15:28.571 "req_id": 1 00:15:28.571 } 00:15:28.571 Got JSON-RPC error response 00:15:28.571 response: 00:15:28.571 { 00:15:28.571 "code": -32602, 00:15:28.571 "message": "Invalid cntlid range [1-0]" 00:15:28.571 }' 00:15:28.571 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:15:28.571 { 00:15:28.571 "nqn": "nqn.2016-06.io.spdk:cnode32385", 00:15:28.571 "max_cntlid": 0, 00:15:28.571 "method": "nvmf_create_subsystem", 00:15:28.571 "req_id": 1 00:15:28.571 } 00:15:28.571 Got JSON-RPC error response 
00:15:28.571 response: 00:15:28.571 { 00:15:28.571 "code": -32602, 00:15:28.571 "message": "Invalid cntlid range [1-0]" 00:15:28.571 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:28.571 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15610 -I 65520 00:15:28.828 [2024-07-25 23:21:26.519163] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15610: invalid cntlid range [1-65520] 00:15:28.829 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:15:28.829 { 00:15:28.829 "nqn": "nqn.2016-06.io.spdk:cnode15610", 00:15:28.829 "max_cntlid": 65520, 00:15:28.829 "method": "nvmf_create_subsystem", 00:15:28.829 "req_id": 1 00:15:28.829 } 00:15:28.829 Got JSON-RPC error response 00:15:28.829 response: 00:15:28.829 { 00:15:28.829 "code": -32602, 00:15:28.829 "message": "Invalid cntlid range [1-65520]" 00:15:28.829 }' 00:15:28.829 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:15:28.829 { 00:15:28.829 "nqn": "nqn.2016-06.io.spdk:cnode15610", 00:15:28.829 "max_cntlid": 65520, 00:15:28.829 "method": "nvmf_create_subsystem", 00:15:28.829 "req_id": 1 00:15:28.829 } 00:15:28.829 Got JSON-RPC error response 00:15:28.829 response: 00:15:28.829 { 00:15:28.829 "code": -32602, 00:15:28.829 "message": "Invalid cntlid range [1-65520]" 00:15:28.829 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:28.829 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8269 -i 6 -I 5 00:15:29.086 [2024-07-25 23:21:26.759948] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8269: invalid cntlid range [6-5] 00:15:29.086 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:15:29.086 { 00:15:29.086 "nqn": "nqn.2016-06.io.spdk:cnode8269", 00:15:29.086 "min_cntlid": 6, 00:15:29.086 "max_cntlid": 5, 00:15:29.086 "method": "nvmf_create_subsystem", 00:15:29.086 "req_id": 1 00:15:29.086 } 00:15:29.086 Got JSON-RPC error response 00:15:29.086 response: 00:15:29.086 { 00:15:29.086 "code": -32602, 00:15:29.086 "message": "Invalid cntlid range [6-5]" 00:15:29.086 }' 00:15:29.086 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:15:29.086 { 00:15:29.086 "nqn": "nqn.2016-06.io.spdk:cnode8269", 00:15:29.086 "min_cntlid": 6, 00:15:29.086 "max_cntlid": 5, 00:15:29.086 "method": "nvmf_create_subsystem", 00:15:29.086 "req_id": 1 00:15:29.086 } 00:15:29.086 Got JSON-RPC error response 00:15:29.086 response: 00:15:29.086 { 00:15:29.086 "code": -32602, 00:15:29.086 "message": "Invalid cntlid range [6-5]" 00:15:29.086 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:29.086 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:29.343 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:15:29.343 { 00:15:29.343 "name": "foobar", 00:15:29.343 "method": "nvmf_delete_target", 00:15:29.343 "req_id": 1 00:15:29.343 } 00:15:29.343 Got JSON-RPC error response 00:15:29.343 response: 00:15:29.343 { 00:15:29.343 "code": -32602, 
00:15:29.343 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:29.343 }' 00:15:29.343 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:15:29.343 { 00:15:29.343 "name": "foobar", 00:15:29.343 "method": "nvmf_delete_target", 00:15:29.343 "req_id": 1 00:15:29.343 } 00:15:29.343 Got JSON-RPC error response 00:15:29.343 response: 00:15:29.343 { 00:15:29.343 "code": -32602, 00:15:29.343 "message": "The specified target doesn't exist, cannot delete it." 00:15:29.343 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:29.343 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:29.343 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:15:29.343 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:29.343 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:15:29.343 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:29.344 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:15:29.344 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:29.344 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:29.344 rmmod nvme_tcp 00:15:29.344 rmmod nvme_fabrics 00:15:29.344 rmmod nvme_keyring 00:15:29.344 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:29.344 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:15:29.344 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:15:29.344 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1361792 ']' 00:15:29.344 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1361792 00:15:29.344 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 1361792 ']' 00:15:29.344 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 1361792 00:15:29.344 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:15:29.344 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:29.344 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1361792 00:15:29.344 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:29.344 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:29.344 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1361792' 00:15:29.344 killing process with pid 1361792 00:15:29.344 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 1361792 00:15:29.344 23:21:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 1361792 00:15:29.602 23:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:29.602 23:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:29.602 23:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:29.602 23:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:29.602 23:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:29.602 23:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.603 23:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:29.603 23:21:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.130 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:32.130 00:15:32.130 real 0m8.433s 00:15:32.130 user 0m19.686s 00:15:32.130 sys 0m2.335s 00:15:32.130 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:32.130 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:32.130 ************************************ 00:15:32.130 END TEST nvmf_invalid 00:15:32.130 ************************************ 00:15:32.130 23:21:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:32.130 23:21:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:32.130 23:21:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:32.130 23:21:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:32.130 ************************************ 00:15:32.130 START TEST nvmf_connect_stress 00:15:32.130 ************************************ 00:15:32.130 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:32.130 * Looking for test storage... 
00:15:32.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:32.130 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:32.130 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:15:32.131 23:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:15:34.033 23:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:34.033 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:34.033 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.033 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:34.033 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:34.034 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:34.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:34.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:15:34.034 00:15:34.034 --- 10.0.0.2 ping statistics --- 00:15:34.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.034 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:34.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:34.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:15:34.034 00:15:34.034 --- 10.0.0.1 ping statistics --- 00:15:34.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.034 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1364416 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1364416 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1364416 ']' 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:34.034 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.034 [2024-07-25 23:21:31.570966] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:15:34.034 [2024-07-25 23:21:31.571048] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.034 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.034 [2024-07-25 23:21:31.609237] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:34.034 [2024-07-25 23:21:31.641154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:34.034 [2024-07-25 23:21:31.738649] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.035 [2024-07-25 23:21:31.738709] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.035 [2024-07-25 23:21:31.738725] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.035 [2024-07-25 23:21:31.738739] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.035 [2024-07-25 23:21:31.738751] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.035 [2024-07-25 23:21:31.738844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:34.035 [2024-07-25 23:21:31.738895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:34.035 [2024-07-25 23:21:31.738898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.293 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:34.293 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:15:34.293 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:34.293 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:34.293 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.293 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.293 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:34.293 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.293 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.293 [2024-07-25 23:21:31.894873] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.293 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.293 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:34.293 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.293 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.293 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.294 [2024-07-25 23:21:31.935165] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.294 NULL1 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1364448 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.294 23:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.294 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1364448 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- 
# rpc_cmd 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.294 23:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.859 23:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.859 23:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1364448 00:15:34.859 23:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.859 23:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.859 23:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.116 23:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.116 23:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1364448 00:15:35.116 23:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.116 23:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.116 23:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.373 23:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.373 23:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1364448 00:15:35.373 23:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.373 23:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.373 23:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.630 23:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.630 23:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1364448 00:15:35.630 23:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.630 23:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.631 23:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.886 23:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.886 23:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1364448 00:15:35.886 23:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.886 23:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.886 23:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.451 23:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.451 23:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1364448 00:15:36.451 23:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:15:36.451 23:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.451 23:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.707 23:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.707 23:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1364448 00:15:36.707 23:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:36.707 23:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.707 23:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.963 23:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.963 23:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1364448 00:15:36.963 23:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:36.963 23:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.964 23:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.221 23:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.221 23:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1364448 00:15:37.221 23:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.221 23:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.221 23:21:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.785 23:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.785 23:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1364448 00:15:37.785 23:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.785 23:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.785 23:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.042 23:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.043 23:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1364448 00:15:38.043 23:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.043 23:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.043 23:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.300 23:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.300 23:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1364448 00:15:38.300 23:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.300 
23:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.300 23:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.557 23:21:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.557 23:21:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1364448 00:15:38.557 23:21:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.557 23:21:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.557 23:21:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.814
[the same [[ 0 == 0 ]] / kill -0 1364448 / rpc_cmd liveness check repeats every 0.3-0.6 s, stamped 00:15:38.814 through 00:15:44.157, while the stress clients run; the identical entries are elided here]
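Condensed, the loop logged above is a poll-until-dead pattern: connect_stress.sh line 34 probes the stress client with kill -0 and line 35 issues an RPC while it is still alive. A minimal sketch of that loop (the pid variable name is illustrative, not the script's actual one):

    while kill -0 "$stress_pid" 2>/dev/null; do   # stress client (pid 1364448 here) still running?
        rpc_cmd                                   # keep the target's RPC socket busy in the meantime
    done
    wait "$stress_pid"                            # reap it once kill -0 starts failing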
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.157 23:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.415 23:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.415 23:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1364448 00:15:44.415 23:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.415 23:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.415 23:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:44.415 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:44.672 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.672 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1364448 00:15:44.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1364448) - No such process 00:15:44.672 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1364448 00:15:44.672 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:44.672 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:44.672 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:44.672 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:44.672 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:15:44.672 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:44.672 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:15:44.672 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:44.672 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:44.672 rmmod nvme_tcp 00:15:44.672 rmmod nvme_fabrics 00:15:44.672 rmmod nvme_keyring 00:15:44.672 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:44.672 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:15:44.672 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:15:44.672 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1364416 ']' 00:15:44.672 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1364416 00:15:44.672 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1364416 ']' 00:15:44.672 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1364416 00:15:44.672 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:15:44.672 23:21:42 
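The teardown that follows (nvmftestfini -> nvmfcleanup) reduces to a sync plus a tolerant module-unload retry; a simplified sketch of the sequence as logged (the early exit on success is an assumption, since the log only shows one iteration of the {1..20} loop):

    sync
    set +e                                    # unloading may legitimately fail while connections drain
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    done
    set -e
    killprocess 1364416                       # finally stop the nvmf_tgt reactor process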
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:44.672 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1364416 00:15:44.672 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:44.672 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:44.672 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1364416' 00:15:44.672 killing process with pid 1364416 00:15:44.672 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1364416 00:15:44.672 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1364416 00:15:44.931 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:44.931 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:44.931 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:44.931 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:44.931 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:44.931 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.931 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:44.931 23:21:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:47.512 00:15:47.512 real 0m15.354s 00:15:47.512 user 0m38.282s 00:15:47.512 sys 0m6.014s 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:47.512 ************************************ 00:15:47.512 END TEST nvmf_connect_stress 00:15:47.512 ************************************ 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:47.512 ************************************ 00:15:47.512 START TEST nvmf_fused_ordering 00:15:47.512 ************************************ 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:47.512 * Looking for test storage... 
00:15:47.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same golangci/protoc/go triple repeated several more times, elided]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[duplicated golangci/protoc/go entries elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[duplicated golangci/protoc/go entries elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.512 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:47.513 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[duplicated golangci/protoc/go entries elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.513 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:15:47.513 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:47.513 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:47.513 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.513 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.513 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.513 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33
-- # '[' -n '' ']' 00:15:47.513 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:47.513 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:47.513 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:47.513 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:47.513 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:47.513 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:47.513 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:47.513 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:47.513 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.513 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:47.513 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.513 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:47.513 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:47.513 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:15:47.513 23:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:15:49.414 23:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:49.414 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:49.414 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
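The device scan above boils down to a lookup table of supported NIC PCI IDs plus a sysfs walk; condensed for reference (IDs and paths copied from the log, loop simplified):

    e810=(0x1592 0x159b)    # Intel E810 family; both ports found on this box are 0x159b
    x722=(0x37d2)
    mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013)
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # kernel netdevs behind each NIC
        net_devs+=("${pci_net_devs[@]##*/}")               # -> cvl_0_0, cvl_0_1
    done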
00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:49.414 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:49.414 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:49.414 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:49.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:49.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:15:49.415 00:15:49.415 --- 10.0.0.2 ping statistics --- 00:15:49.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.415 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:49.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:49.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:15:49.415 00:15:49.415 --- 10.0.0.1 ping statistics --- 00:15:49.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.415 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1367623 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1367623 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1367623 ']' 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:49.415 23:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.415 [2024-07-25 23:21:46.978525] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
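Stripped of the xtrace noise, the namespace plumbing logged just above amounts to the following commands (taken verbatim from the log): cvl_0_0 becomes the target port at 10.0.0.2 inside the cvl_0_0_ns_spdk namespace, cvl_0_1 stays in the root namespace as the initiator port at 10.0.0.1, and the two pings prove reachability in both directions.

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator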
00:15:49.415 [2024-07-25 23:21:46.978626] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.415 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.415 [2024-07-25 23:21:47.016526] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:49.415 [2024-07-25 23:21:47.048437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.674 [2024-07-25 23:21:47.140626] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.674 [2024-07-25 23:21:47.140679] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.674 [2024-07-25 23:21:47.140705] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.674 [2024-07-25 23:21:47.140719] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.674 [2024-07-25 23:21:47.140733] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:49.674 [2024-07-25 23:21:47.140769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.674 [2024-07-25 23:21:47.293608] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:49.674 
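The target launch that completes here follows the harness's stock pattern: run nvmf_tgt inside the namespace, record its pid, and block until the RPC socket answers. A sketch (the backgrounding and pid capture are implied by the log rather than shown verbatim):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!                  # 1367623 in this run
    waitforlisten "$nvmfpid"    # polls /var/tmp/spdk.sock until the target responds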
23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.674 [2024-07-25 23:21:47.309829] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.674 NULL1 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.674 23:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:49.674 [2024-07-25 23:21:47.355567] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:15:49.674 [2024-07-25 23:21:47.355612] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1367726 ] 00:15:49.674 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.674 [2024-07-25 23:21:47.392735] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
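With the target up, the entire subsystem configuration for this test is the short RPC sequence logged above; consolidated for reference (commands verbatim from the log, comments added):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192    # TCP transport; '-o' and '-u 8192' exactly as logged
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # allow any host, max 10 namespaces
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512            # 1000 MiB null bdev, 512-byte blocks
    rpc_cmd bdev_wait_for_examine
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering tool then connects to that subsystem; each fused_ordering(N) line below appears to log one iteration of its fused-command submission loop.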
00:15:50.240 Attached to nqn.2016-06.io.spdk:cnode1 00:15:50.240 Namespace ID: 1 size: 1GB 00:15:50.240 fused_ordering(0) 00:15:50.240 fused_ordering(1) [fused_ordering(2) through fused_ordering(203) elided; one counter line per iteration, all stamped 00:15:50.240] fused_ordering(204) 00:15:50.806 fused_ordering(205) [fused_ordering(206) through fused_ordering(408) elided, stamped 00:15:50.806] fused_ordering(409) 00:15:51.371 fused_ordering(410) [fused_ordering(411) through fused_ordering(613) elided, stamped 00:15:51.371-00:15:51.372] fused_ordering(614) 00:15:51.937 fused_ordering(615) [fused_ordering(616) through fused_ordering(751) elided, stamped 00:15:51.937-00:15:51.938] 00:15:51.938 fused_ordering(752)
00:15:51.938 fused_ordering(753) 00:15:51.938 fused_ordering(754) 00:15:51.938 fused_ordering(755) 00:15:51.938 fused_ordering(756) 00:15:51.938 fused_ordering(757) 00:15:51.938 fused_ordering(758) 00:15:51.938 fused_ordering(759) 00:15:51.938 fused_ordering(760) 00:15:51.938 fused_ordering(761) 00:15:51.938 fused_ordering(762) 00:15:51.938 fused_ordering(763) 00:15:51.938 fused_ordering(764) 00:15:51.938 fused_ordering(765) 00:15:51.938 fused_ordering(766) 00:15:51.938 fused_ordering(767) 00:15:51.938 fused_ordering(768) 00:15:51.938 fused_ordering(769) 00:15:51.938 fused_ordering(770) 00:15:51.938 fused_ordering(771) 00:15:51.938 fused_ordering(772) 00:15:51.938 fused_ordering(773) 00:15:51.938 fused_ordering(774) 00:15:51.938 fused_ordering(775) 00:15:51.938 fused_ordering(776) 00:15:51.938 fused_ordering(777) 00:15:51.938 fused_ordering(778) 00:15:51.938 fused_ordering(779) 00:15:51.938 fused_ordering(780) 00:15:51.938 fused_ordering(781) 00:15:51.938 fused_ordering(782) 00:15:51.938 fused_ordering(783) 00:15:51.938 fused_ordering(784) 00:15:51.938 fused_ordering(785) 00:15:51.938 fused_ordering(786) 00:15:51.938 fused_ordering(787) 00:15:51.938 fused_ordering(788) 00:15:51.938 fused_ordering(789) 00:15:51.938 fused_ordering(790) 00:15:51.938 fused_ordering(791) 00:15:51.938 fused_ordering(792) 00:15:51.938 fused_ordering(793) 00:15:51.938 fused_ordering(794) 00:15:51.938 fused_ordering(795) 00:15:51.938 fused_ordering(796) 00:15:51.938 fused_ordering(797) 00:15:51.938 fused_ordering(798) 00:15:51.938 fused_ordering(799) 00:15:51.938 fused_ordering(800) 00:15:51.938 fused_ordering(801) 00:15:51.938 fused_ordering(802) 00:15:51.938 fused_ordering(803) 00:15:51.938 fused_ordering(804) 00:15:51.938 fused_ordering(805) 00:15:51.938 fused_ordering(806) 00:15:51.938 fused_ordering(807) 00:15:51.938 fused_ordering(808) 00:15:51.938 fused_ordering(809) 00:15:51.938 fused_ordering(810) 00:15:51.938 fused_ordering(811) 00:15:51.938 fused_ordering(812) 00:15:51.938 fused_ordering(813) 00:15:51.938 fused_ordering(814) 00:15:51.938 fused_ordering(815) 00:15:51.938 fused_ordering(816) 00:15:51.938 fused_ordering(817) 00:15:51.938 fused_ordering(818) 00:15:51.938 fused_ordering(819) 00:15:51.938 fused_ordering(820) 00:15:52.504 fused_ordering(821) 00:15:52.504 fused_ordering(822) 00:15:52.504 fused_ordering(823) 00:15:52.504 fused_ordering(824) 00:15:52.504 fused_ordering(825) 00:15:52.504 fused_ordering(826) 00:15:52.504 fused_ordering(827) 00:15:52.504 fused_ordering(828) 00:15:52.504 fused_ordering(829) 00:15:52.504 fused_ordering(830) 00:15:52.504 fused_ordering(831) 00:15:52.504 fused_ordering(832) 00:15:52.504 fused_ordering(833) 00:15:52.504 fused_ordering(834) 00:15:52.504 fused_ordering(835) 00:15:52.504 fused_ordering(836) 00:15:52.504 fused_ordering(837) 00:15:52.504 fused_ordering(838) 00:15:52.504 fused_ordering(839) 00:15:52.504 fused_ordering(840) 00:15:52.504 fused_ordering(841) 00:15:52.504 fused_ordering(842) 00:15:52.504 fused_ordering(843) 00:15:52.504 fused_ordering(844) 00:15:52.504 fused_ordering(845) 00:15:52.504 fused_ordering(846) 00:15:52.504 fused_ordering(847) 00:15:52.504 fused_ordering(848) 00:15:52.504 fused_ordering(849) 00:15:52.504 fused_ordering(850) 00:15:52.504 fused_ordering(851) 00:15:52.504 fused_ordering(852) 00:15:52.504 fused_ordering(853) 00:15:52.504 fused_ordering(854) 00:15:52.504 fused_ordering(855) 00:15:52.504 fused_ordering(856) 00:15:52.504 fused_ordering(857) 00:15:52.504 fused_ordering(858) 00:15:52.504 fused_ordering(859) 00:15:52.504 
fused_ordering(860) 00:15:52.504 fused_ordering(861) 00:15:52.504 fused_ordering(862) 00:15:52.504 fused_ordering(863) 00:15:52.504 fused_ordering(864) 00:15:52.504 fused_ordering(865) 00:15:52.504 fused_ordering(866) 00:15:52.504 fused_ordering(867) 00:15:52.504 fused_ordering(868) 00:15:52.504 fused_ordering(869) 00:15:52.504 fused_ordering(870) 00:15:52.504 fused_ordering(871) 00:15:52.504 fused_ordering(872) 00:15:52.504 fused_ordering(873) 00:15:52.504 fused_ordering(874) 00:15:52.504 fused_ordering(875) 00:15:52.504 fused_ordering(876) 00:15:52.504 fused_ordering(877) 00:15:52.504 fused_ordering(878) 00:15:52.504 fused_ordering(879) 00:15:52.504 fused_ordering(880) 00:15:52.504 fused_ordering(881) 00:15:52.504 fused_ordering(882) 00:15:52.504 fused_ordering(883) 00:15:52.504 fused_ordering(884) 00:15:52.504 fused_ordering(885) 00:15:52.504 fused_ordering(886) 00:15:52.504 fused_ordering(887) 00:15:52.504 fused_ordering(888) 00:15:52.504 fused_ordering(889) 00:15:52.504 fused_ordering(890) 00:15:52.504 fused_ordering(891) 00:15:52.504 fused_ordering(892) 00:15:52.504 fused_ordering(893) 00:15:52.504 fused_ordering(894) 00:15:52.504 fused_ordering(895) 00:15:52.504 fused_ordering(896) 00:15:52.504 fused_ordering(897) 00:15:52.504 fused_ordering(898) 00:15:52.504 fused_ordering(899) 00:15:52.504 fused_ordering(900) 00:15:52.504 fused_ordering(901) 00:15:52.504 fused_ordering(902) 00:15:52.504 fused_ordering(903) 00:15:52.504 fused_ordering(904) 00:15:52.504 fused_ordering(905) 00:15:52.504 fused_ordering(906) 00:15:52.504 fused_ordering(907) 00:15:52.504 fused_ordering(908) 00:15:52.504 fused_ordering(909) 00:15:52.504 fused_ordering(910) 00:15:52.504 fused_ordering(911) 00:15:52.504 fused_ordering(912) 00:15:52.504 fused_ordering(913) 00:15:52.504 fused_ordering(914) 00:15:52.504 fused_ordering(915) 00:15:52.504 fused_ordering(916) 00:15:52.504 fused_ordering(917) 00:15:52.504 fused_ordering(918) 00:15:52.504 fused_ordering(919) 00:15:52.504 fused_ordering(920) 00:15:52.504 fused_ordering(921) 00:15:52.504 fused_ordering(922) 00:15:52.504 fused_ordering(923) 00:15:52.504 fused_ordering(924) 00:15:52.504 fused_ordering(925) 00:15:52.504 fused_ordering(926) 00:15:52.504 fused_ordering(927) 00:15:52.504 fused_ordering(928) 00:15:52.504 fused_ordering(929) 00:15:52.505 fused_ordering(930) 00:15:52.505 fused_ordering(931) 00:15:52.505 fused_ordering(932) 00:15:52.505 fused_ordering(933) 00:15:52.505 fused_ordering(934) 00:15:52.505 fused_ordering(935) 00:15:52.505 fused_ordering(936) 00:15:52.505 fused_ordering(937) 00:15:52.505 fused_ordering(938) 00:15:52.505 fused_ordering(939) 00:15:52.505 fused_ordering(940) 00:15:52.505 fused_ordering(941) 00:15:52.505 fused_ordering(942) 00:15:52.505 fused_ordering(943) 00:15:52.505 fused_ordering(944) 00:15:52.505 fused_ordering(945) 00:15:52.505 fused_ordering(946) 00:15:52.505 fused_ordering(947) 00:15:52.505 fused_ordering(948) 00:15:52.505 fused_ordering(949) 00:15:52.505 fused_ordering(950) 00:15:52.505 fused_ordering(951) 00:15:52.505 fused_ordering(952) 00:15:52.505 fused_ordering(953) 00:15:52.505 fused_ordering(954) 00:15:52.505 fused_ordering(955) 00:15:52.505 fused_ordering(956) 00:15:52.505 fused_ordering(957) 00:15:52.505 fused_ordering(958) 00:15:52.505 fused_ordering(959) 00:15:52.505 fused_ordering(960) 00:15:52.505 fused_ordering(961) 00:15:52.505 fused_ordering(962) 00:15:52.505 fused_ordering(963) 00:15:52.505 fused_ordering(964) 00:15:52.505 fused_ordering(965) 00:15:52.505 fused_ordering(966) 00:15:52.505 fused_ordering(967) 
00:15:52.505 fused_ordering(968) 00:15:52.505 fused_ordering(969) 00:15:52.505 fused_ordering(970) 00:15:52.505 fused_ordering(971) 00:15:52.505 fused_ordering(972) 00:15:52.505 fused_ordering(973) 00:15:52.505 fused_ordering(974) 00:15:52.505 fused_ordering(975) 00:15:52.505 fused_ordering(976) 00:15:52.505 fused_ordering(977) 00:15:52.505 fused_ordering(978) 00:15:52.505 fused_ordering(979) 00:15:52.505 fused_ordering(980) 00:15:52.505 fused_ordering(981) 00:15:52.505 fused_ordering(982) 00:15:52.505 fused_ordering(983) 00:15:52.505 fused_ordering(984) 00:15:52.505 fused_ordering(985) 00:15:52.505 fused_ordering(986) 00:15:52.505 fused_ordering(987) 00:15:52.505 fused_ordering(988) 00:15:52.505 fused_ordering(989) 00:15:52.505 fused_ordering(990) 00:15:52.505 fused_ordering(991) 00:15:52.505 fused_ordering(992) 00:15:52.505 fused_ordering(993) 00:15:52.505 fused_ordering(994) 00:15:52.505 fused_ordering(995) 00:15:52.505 fused_ordering(996) 00:15:52.505 fused_ordering(997) 00:15:52.505 fused_ordering(998) 00:15:52.505 fused_ordering(999) 00:15:52.505 fused_ordering(1000) 00:15:52.505 fused_ordering(1001) 00:15:52.505 fused_ordering(1002) 00:15:52.505 fused_ordering(1003) 00:15:52.505 fused_ordering(1004) 00:15:52.505 fused_ordering(1005) 00:15:52.505 fused_ordering(1006) 00:15:52.505 fused_ordering(1007) 00:15:52.505 fused_ordering(1008) 00:15:52.505 fused_ordering(1009) 00:15:52.505 fused_ordering(1010) 00:15:52.505 fused_ordering(1011) 00:15:52.505 fused_ordering(1012) 00:15:52.505 fused_ordering(1013) 00:15:52.505 fused_ordering(1014) 00:15:52.505 fused_ordering(1015) 00:15:52.505 fused_ordering(1016) 00:15:52.505 fused_ordering(1017) 00:15:52.505 fused_ordering(1018) 00:15:52.505 fused_ordering(1019) 00:15:52.505 fused_ordering(1020) 00:15:52.505 fused_ordering(1021) 00:15:52.505 fused_ordering(1022) 00:15:52.505 fused_ordering(1023) 00:15:52.505 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:52.505 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:52.505 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:52.505 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:15:52.505 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:52.505 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:15:52.505 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:52.505 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:52.764 rmmod nvme_tcp 00:15:52.764 rmmod nvme_fabrics 00:15:52.764 rmmod nvme_keyring 00:15:52.764 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:52.764 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:15:52.764 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:15:52.764 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1367623 ']' 00:15:52.764 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1367623 00:15:52.764 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 
1367623 ']' 00:15:52.764 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 1367623 00:15:52.764 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:15:52.764 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:52.764 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1367623 00:15:52.764 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:52.764 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:52.764 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1367623' 00:15:52.764 killing process with pid 1367623 00:15:52.764 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1367623 00:15:52.764 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1367623 00:15:53.023 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:53.023 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:53.023 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:53.023 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:53.023 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:53.023 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.023 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:53.023 23:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.923 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:54.923 00:15:54.923 real 0m7.898s 00:15:54.923 user 0m5.582s 00:15:54.923 sys 0m3.481s 00:15:54.923 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:54.923 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:54.923 ************************************ 00:15:54.923 END TEST nvmf_fused_ordering 00:15:54.923 ************************************ 00:15:54.923 23:21:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:54.923 23:21:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:54.923 23:21:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:54.923 23:21:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:54.923 ************************************ 00:15:54.923 START TEST nvmf_ns_masking 00:15:54.923 ************************************ 00:15:54.923 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 
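The nvmftestfini teardown traced above follows a fixed pattern: disarm the exit trap, sync, retry unloading nvme-tcp and nvme-fabrics with set +e so a transiently busy module does not abort the run, then kill the target by PID and wait for it so the listener port is actually released before the next test starts. A minimal bash sketch of that pattern, assuming the helper names seen in the trace (the retry budget and error handling here are illustrative, not the literal nvmf/common.sh source):

    # Sketch of the teardown pattern shown in the trace above.
    nvmfcleanup() {
        sync
        set +e                        # unloading may fail while I/O drains
        for i in {1..20}; do
            modprobe -v -r nvme-tcp 2>/dev/null
            modprobe -v -r nvme-fabrics 2>/dev/null && break
            sleep 1
        done
        set -e
    }

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                # no PID recorded, nothing to do
        kill -0 "$pid" 2>/dev/null || return 0   # process already gone
        kill "$pid"                              # SIGTERM lets the reactor exit cleanly
        wait "$pid" 2>/dev/null                  # reap it so ports/sockets are freed
    }

    nvmfcleanup
    killprocess "$nvmfpid"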
00:15:55.181 * Looking for test storage... 00:15:55.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:55.181 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:55.181 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:55.181 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:55.181 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:55.182 23:21:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=a1692ca9-9163-449d-8e4a-5303d8b7f863 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=c0e97e97-4499-43de-a54e-6bdeae0a4147 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=489b0c3a-efad-4cbd-9eca-22b766fb8073 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:15:55.182 23:21:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:57.090 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:57.090 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:15:57.090 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:15:57.090 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:57.090 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:57.090 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:57.090 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:57.090 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:15:57.090 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:57.090 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:15:57.090 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:15:57.090 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:15:57.090 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:15:57.090 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:15:57.090 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:15:57.090 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:57.090 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:57.090 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:57.090 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:57.091 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:57.091 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:57.091 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:57.091 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:57.091 23:21:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:57.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:57.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:15:57.091 00:15:57.091 --- 10.0.0.2 ping statistics --- 00:15:57.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.091 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:57.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:57.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:15:57.091 00:15:57.091 --- 10.0.0.1 ping statistics --- 00:15:57.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.091 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1369931 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1369931 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1369931 ']' 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:57.091 23:21:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:57.348 [2024-07-25 23:21:54.848722] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:15:57.348 [2024-07-25 23:21:54.848810] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.348 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.348 [2024-07-25 23:21:54.893155] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:57.348 [2024-07-25 23:21:54.924102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.348 [2024-07-25 23:21:55.019153] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.348 [2024-07-25 23:21:55.019210] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.349 [2024-07-25 23:21:55.019236] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.349 [2024-07-25 23:21:55.019258] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.349 [2024-07-25 23:21:55.019271] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
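nvmfappstart has just launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace (ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF) and waitforlisten is polling until the RPC socket at /var/tmp/spdk.sock answers; the trace shows max_retries=100. A sketch of that start-and-poll sequence, under the assumption that a PID liveness check plus a UNIX-socket test is enough (the real helper also probes the socket via rpc.py, omitted here):

    # Launch the target in the test netns and wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF &
    nvmfpid=$!

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died on startup
            [ -S "$rpc_addr" ] && return 0           # RPC socket is live
            sleep 0.1
        done
        return 1                                     # gave up waiting
    }

    waitforlisten "$nvmfpid" /var/tmp/spdk.sock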
00:15:57.349 [2024-07-25 23:21:55.019305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.606 23:21:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:57.606 23:21:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:15:57.606 23:21:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:57.606 23:21:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:57.606 23:21:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:57.606 23:21:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:57.606 23:21:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:57.863 [2024-07-25 23:21:55.438210] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.863 23:21:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:57.863 23:21:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:57.863 23:21:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:58.121 Malloc1 00:15:58.121 23:21:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:58.378 Malloc2 00:15:58.378 23:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:58.636 23:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:58.893 23:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:59.150 [2024-07-25 23:21:56.817295] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:59.150 23:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:59.150 23:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 489b0c3a-efad-4cbd-9eca-22b766fb8073 -a 10.0.0.2 -s 4420 -i 4 00:15:59.408 23:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:59.408 23:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:59.408 23:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:59.408 23:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:59.408 
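The connect helper builds the initiator side: nvme connect with the generated host NQN/UUID and -i 4 (capping the number of I/O queues), after which waitforserial polls lsblk every two seconds until a block device carrying the subsystem serial SPDKISFASTANDAWESOME shows up. A sketch of both, using the values printed in the trace (the loop bound follows the (( i++ <= 15 )) check visible just below):

    # Initiator-side connect, then wait for the namespace's block device.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 \
        -I 489b0c3a-efad-4cbd-9eca-22b766fb8073 \
        -a 10.0.0.2 -s 4420 -i 4

    waitforserial() {
        local serial=$1 want=${2:-1} i=0
        while (( i++ <= 15 )); do
            sleep 2                                   # give udev time to settle
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices >= want )) && return 0
        done
        return 1
    }

    waitforserial SPDKISFASTANDAWESOME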
23:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:01.304 23:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:01.304 23:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:01.304 23:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:01.304 23:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:01.304 23:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:01.304 23:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:01.304 23:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:01.304 23:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:01.304 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:01.304 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:01.304 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:01.304 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:01.304 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:01.304 [ 0]:0x1 00:16:01.304 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:01.304 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:01.562 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7eb06ec5ac8c4e9abd6a3300b00cc949 00:16:01.562 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7eb06ec5ac8c4e9abd6a3300b00cc949 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:01.562 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:01.820 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:01.820 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:01.820 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:01.820 [ 0]:0x1 00:16:01.820 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:01.820 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:01.820 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7eb06ec5ac8c4e9abd6a3300b00cc949 00:16:01.820 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7eb06ec5ac8c4e9abd6a3300b00cc949 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:01.821 23:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:01.821 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:01.821 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:01.821 [ 1]:0x2 00:16:01.821 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:01.821 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:01.821 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c9cb026f6652423681040dbc39443bfe 00:16:01.821 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c9cb026f6652423681040dbc39443bfe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:01.821 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:01.821 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:02.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.079 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:02.337 23:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:02.595 23:22:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:02.595 23:22:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 489b0c3a-efad-4cbd-9eca-22b766fb8073 -a 10.0.0.2 -s 4420 -i 4 00:16:02.595 23:22:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:02.595 23:22:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:02.595 23:22:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:02.595 23:22:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:16:02.595 23:22:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:16:02.595 23:22:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
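At this point the namespace has been re-attached with --no-auto-visible, so the reconnected host should see no namespace at all: nvme list-ns finds nothing at 0x1 and the nguid read back is all zeros, which is what the NOT ns_is_visible 0x1 check below asserts. Visibility is then granted and revoked per host NQN with nvmf_ns_add_host and nvmf_ns_remove_host. A condensed sketch of the masking sequence this part of the test drives, using the rpc.py calls and NQNs from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Re-attach Malloc1 as ns 1, hidden from every host by default.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 \
        --no-auto-visible

    # Masked: on the initiator, nvme id-ns /dev/nvme0 -n 0x1 -o json now
    # reports an all-zero nguid and nvme list-ns does not show 0x1.

    # Grant ns 1 to host1; it becomes visible to that initiator only.
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

    # Revoke the grant; the namespace is masked again.
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1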
return 0 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:05.121 [ 0]:0x2 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=c9cb026f6652423681040dbc39443bfe 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c9cb026f6652423681040dbc39443bfe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:05.121 [ 0]:0x1 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7eb06ec5ac8c4e9abd6a3300b00cc949 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7eb06ec5ac8c4e9abd6a3300b00cc949 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:05.121 [ 1]:0x2 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:05.121 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:05.122 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c9cb026f6652423681040dbc39443bfe 00:16:05.122 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c9cb026f6652423681040dbc39443bfe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:05.122 23:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:05.688 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:05.688 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:05.688 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:05.688 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:05.688 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:05.688 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:05.689 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:05.689 23:22:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:05.689 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:05.689 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:05.689 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:05.689 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:05.689 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:05.689 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:05.689 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:05.689 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:05.689 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:05.689 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:05.689 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:05.689 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:05.689 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:05.689 [ 0]:0x2 00:16:05.689 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:05.689 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:05.689 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c9cb026f6652423681040dbc39443bfe 00:16:05.689 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c9cb026f6652423681040dbc39443bfe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:05.689 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:05.689 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:05.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.689 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:05.946 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:05.946 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 489b0c3a-efad-4cbd-9eca-22b766fb8073 -a 10.0.0.2 -s 4420 -i 4 00:16:06.203 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:06.203 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:06.204 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:06.204 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:06.204 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:06.204 23:22:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:08.107 23:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:08.107 23:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:08.107 23:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:08.107 23:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:08.107 23:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:08.107 23:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:08.107 23:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:08.107 23:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:08.107 23:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:08.107 23:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:08.107 23:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:08.107 23:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:08.107 23:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:08.107 [ 0]:0x1 00:16:08.107 23:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:08.107 23:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:08.365 23:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7eb06ec5ac8c4e9abd6a3300b00cc949 00:16:08.365 23:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7eb06ec5ac8c4e9abd6a3300b00cc949 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:08.365 23:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:08.365 23:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:08.365 23:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:08.365 [ 1]:0x2 00:16:08.365 23:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:08.365 23:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:08.365 23:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c9cb026f6652423681040dbc39443bfe 00:16:08.365 23:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c9cb026f6652423681040dbc39443bfe != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:08.365 23:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:08.624 [ 0]:0x2 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c9cb026f6652423681040dbc39443bfe 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c9cb026f6652423681040dbc39443bfe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:08.624 23:22:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:08.624 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:08.882 [2024-07-25 23:22:06.582908] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:08.882 request: 00:16:08.882 { 00:16:08.882 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:08.882 "nsid": 2, 00:16:08.882 "host": "nqn.2016-06.io.spdk:host1", 00:16:08.882 "method": "nvmf_ns_remove_host", 00:16:08.882 "req_id": 1 00:16:08.882 } 00:16:08.882 Got JSON-RPC error response 00:16:08.882 response: 00:16:08.882 { 00:16:08.882 "code": -32602, 00:16:08.882 "message": "Invalid parameters" 00:16:08.882 } 00:16:08.882 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:08.882 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:08.882 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:08.882 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:08.882 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:08.882 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:08.882 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:08.882 23:22:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:09.140 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:09.140 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:09.140 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:09.140 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:09.140 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:09.140 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:09.140 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:09.140 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:09.140 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:09.140 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:09.140 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:09.140 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:09.140 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:09.140 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:09.140 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:09.140 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:09.140 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:09.140 [ 0]:0x2 00:16:09.140 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:09.141 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:09.141 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c9cb026f6652423681040dbc39443bfe 00:16:09.141 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c9cb026f6652423681040dbc39443bfe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:09.141 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:09.141 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:09.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.399 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1371547 00:16:09.399 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:09.399 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.399 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1371547 /var/tmp/host.sock 00:16:09.399 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1371547 ']' 00:16:09.399 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:09.399 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:09.399 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:09.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:09.399 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:09.399 23:22:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:09.399 [2024-07-25 23:22:06.929105] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:16:09.399 [2024-07-25 23:22:06.929200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1371547 ] 00:16:09.399 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.399 [2024-07-25 23:22:06.961654] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
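The masking mechanics traced above reduce to three parts: the namespace is attached with --no-auto-visible so no host sees it by default, nvmf_ns_add_host / nvmf_ns_remove_host toggle per-host visibility over JSON-RPC, and the host verifies the result by reading the NGUID back with nvme-cli. A minimal sketch of the probe, reconstructed from the ns_masking.sh xtrace above (the controller name nvme0 is resolved from nvme list-subsys exactly as in the trace):

    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        # a masked namespace reports an all-zero NGUID
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

The NOT wrapper in the trace asserts that this probe fails (es=1) once host1 has been removed from namespace 1, while the nvmf_ns_remove_host call against namespace 2, which was not attached with --no-auto-visible, is expected to fail with the JSON-RPC -32602 error logged above.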
00:16:09.399 [2024-07-25 23:22:06.993570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.399 [2024-07-25 23:22:07.088179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.658 23:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:09.658 23:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:09.658 23:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:09.916 23:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:10.482 23:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid a1692ca9-9163-449d-8e4a-5303d8b7f863 00:16:10.482 23:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:10.482 23:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g A1692CA99163449D8E4A5303D8B7F863 -i 00:16:10.482 23:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid c0e97e97-4499-43de-a54e-6bdeae0a4147 00:16:10.482 23:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:10.740 23:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C0E97E97449943DEA54E6BDEAE0A4147 -i 00:16:10.998 23:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:11.256 23:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:11.514 23:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:11.514 23:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:11.772 nvme0n1 00:16:11.772 23:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:11.772 23:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:12.030 nvme1n2 00:16:12.030 23:22:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:12.030 23:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:12.030 23:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:12.030 23:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:12.030 23:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:12.288 23:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:12.288 23:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:12.289 23:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:12.289 23:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:12.546 23:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ a1692ca9-9163-449d-8e4a-5303d8b7f863 == \a\1\6\9\2\c\a\9\-\9\1\6\3\-\4\4\9\d\-\8\e\4\a\-\5\3\0\3\d\8\b\7\f\8\6\3 ]] 00:16:12.546 23:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:12.546 23:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:12.546 23:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:12.805 23:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ c0e97e97-4499-43de-a54e-6bdeae0a4147 == \c\0\e\9\7\e\9\7\-\4\4\9\9\-\4\3\d\e\-\a\5\4\e\-\6\b\d\e\a\e\0\a\4\1\4\7 ]] 00:16:12.805 23:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1371547 00:16:12.805 23:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1371547 ']' 00:16:12.805 23:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1371547 00:16:12.805 23:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:12.805 23:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:12.805 23:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1371547 00:16:12.805 23:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:12.805 23:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:12.805 23:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1371547' 00:16:12.805 killing process with pid 1371547 00:16:12.805 23:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1371547 00:16:12.805 23:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1371547 00:16:13.371 23:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:13.630 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:16:13.630 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:16:13.630 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:13.630 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:16:13.630 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:13.630 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:16:13.630 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:13.630 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:13.630 rmmod nvme_tcp 00:16:13.630 rmmod nvme_fabrics 00:16:13.630 rmmod nvme_keyring 00:16:13.630 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:13.630 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:16:13.630 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:16:13.630 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1369931 ']' 00:16:13.630 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1369931 00:16:13.630 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1369931 ']' 00:16:13.630 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1369931 00:16:13.630 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:13.630 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:13.630 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1369931 00:16:13.630 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:13.630 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:13.630 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1369931' 00:16:13.630 killing process with pid 1369931 00:16:13.630 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1369931 00:16:13.630 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1369931 00:16:13.889 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:13.889 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:13.889 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:13.889 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:13.889 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # 
remove_spdk_ns 00:16:13.889 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.889 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:13.889 23:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:16.429 00:16:16.429 real 0m20.983s 00:16:16.429 user 0m27.340s 00:16:16.429 sys 0m4.140s 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:16.429 ************************************ 00:16:16.429 END TEST nvmf_ns_masking 00:16:16.429 ************************************ 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:16.429 ************************************ 00:16:16.429 START TEST nvmf_nvme_cli 00:16:16.429 ************************************ 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:16.429 * Looking for test storage... 
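Between the two tests the harness tears the fabric down in a specific order, since the kernel initiator modules cannot be unloaded while a controller is still connected. Condensed from the teardown just logged (the netns deletion is an assumption inferred from the _remove_spdk_ns name; the other steps are copied from the trace):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # drop host-side paths first
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp                          # rmmod nvme_tcp, nvme_fabrics, nvme_keyring
    kill "$nvmfpid" && wait "$nvmfpid"               # killprocess 1369931 in the trace above
    ip netns delete cvl_0_0_ns_spdk                  # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1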
00:16:16.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.429 23:22:13 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.429 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:16.430 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.430 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:16:16.430 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:16.430 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:16.430 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:16.430 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.430 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.430 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:16.430 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:16.430 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:16.430 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:16.430 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:16.430 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:16.430 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:16:16.430 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:16.430 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:16.430 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:16.430 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:16.430 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:16.430 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.430 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:16.430 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.430 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:16.430 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:16.430 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:16:16.430 23:22:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:18.338 23:22:15 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:18.338 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:18.338 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:18.338 23:22:15 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:18.338 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:18.338 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:18.338 23:22:15 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:18.338 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:18.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:18.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:16:18.339 00:16:18.339 --- 10.0.0.2 ping statistics --- 00:16:18.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.339 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:18.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:18.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:16:18.339 00:16:18.339 --- 10.0.0.1 ping statistics --- 00:16:18.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.339 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1374035 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1374035 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1374035 ']' 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:18.339 23:22:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:18.339 [2024-07-25 23:22:15.781226] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:16:18.339 [2024-07-25 23:22:15.781308] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.339 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.339 [2024-07-25 23:22:15.819669] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:18.339 [2024-07-25 23:22:15.851980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:18.339 [2024-07-25 23:22:15.946694] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:18.339 [2024-07-25 23:22:15.946758] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:18.339 [2024-07-25 23:22:15.946784] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:18.339 [2024-07-25 23:22:15.946798] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:18.339 [2024-07-25 23:22:15.946809] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:18.339 [2024-07-25 23:22:15.946892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.339 [2024-07-25 23:22:15.946948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.339 [2024-07-25 23:22:15.947002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:18.339 [2024-07-25 23:22:15.947005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.599 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:18.599 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:16:18.599 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:18.599 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:18.599 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:18.599 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.599 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:18.599 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.599 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:18.599 [2024-07-25 23:22:16.106730] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.599 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.599 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:18.599 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.599 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:18.599 Malloc0 00:16:18.599 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
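The test-bed construction traced above condenses to a short shell sequence. The sketch below is assembled only from commands visible in this trace; the interface names (cvl_0_0/cvl_0_1), the 10.0.0.0/24 addressing, and the SPDK binary/script paths are specific to this rig.

# Target NIC moves into a private network namespace so target and initiator
# can run on one host without sharing a TCP/IP stack.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
ping -c 1 10.0.0.2                                                 # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> initiator

# The target runs inside the namespace; -m 0xF pins four reactors to cores 0-3
# (hence the four "Reactor started" notices above) and -e 0xFFFF enables all
# tracepoint groups.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The rpc_cmd provisioning that follows maps one-to-one onto plain scripts/rpc.py calls (the RPC endpoint is a Unix socket, so no netns exec is needed), and the initiator side is standard nvme-cli; the harness additionally passes --hostnqn/--hostid, which nvme-cli can otherwise derive from /etc/nvme/hostnqn:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
nvme discover -t tcp -a 10.0.0.2 -s 4420       # lists both records seen below
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme disconnect -n nqn.2016-06.io.spdk:cnode1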
00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:18.600 Malloc1 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:18.600 [2024-07-25 23:22:16.192601] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:16:18.600 00:16:18.600 Discovery Log Number of Records 2, Generation counter 2 00:16:18.600 =====Discovery 
Log Entry 0====== 00:16:18.600 trtype: tcp 00:16:18.600 adrfam: ipv4 00:16:18.600 subtype: current discovery subsystem 00:16:18.600 treq: not required 00:16:18.600 portid: 0 00:16:18.600 trsvcid: 4420 00:16:18.600 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:18.600 traddr: 10.0.0.2 00:16:18.600 eflags: explicit discovery connections, duplicate discovery information 00:16:18.600 sectype: none 00:16:18.600 =====Discovery Log Entry 1====== 00:16:18.600 trtype: tcp 00:16:18.600 adrfam: ipv4 00:16:18.600 subtype: nvme subsystem 00:16:18.600 treq: not required 00:16:18.600 portid: 0 00:16:18.600 trsvcid: 4420 00:16:18.600 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:18.600 traddr: 10.0.0.2 00:16:18.600 eflags: none 00:16:18.600 sectype: none 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:18.600 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:19.167 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:19.167 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:16:19.167 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:19.167 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:19.167 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:19.167 23:22:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:16:21.703 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:21.703 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:21.703 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:21.703 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:21.703 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:21.703 23:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:16:21.703 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:21.703 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:21.703 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:21.703 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:21.703 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:21.703 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:21.703 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:21.703 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:21.703 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:21.703 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:21.703 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:21.703 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:21.703 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:21.703 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:21.704 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:21.704 /dev/nvme0n1 ]] 00:16:21.704 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:21.704 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:21.704 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:21.704 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:21.704 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:21.704 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:21.704 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:21.704 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:21.704 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:21.704 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:21.704 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:21.704 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:21.704 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:21.704 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:21.704 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:21.704 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:21.704 23:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:21.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.704 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:21.704 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:16:21.704 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:21.704 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:21.704 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:21.704 23:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:21.704 rmmod nvme_tcp 00:16:21.704 rmmod nvme_fabrics 00:16:21.704 rmmod nvme_keyring 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1374035 ']' 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1374035 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1374035 ']' 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1374035 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:16:21.704 23:22:19 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1374035 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1374035' 00:16:21.704 killing process with pid 1374035 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1374035 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1374035 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:21.704 23:22:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:24.247 00:16:24.247 real 0m7.774s 00:16:24.247 user 0m14.099s 00:16:24.247 sys 0m2.099s 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:24.247 ************************************ 00:16:24.247 END TEST nvmf_nvme_cli 00:16:24.247 ************************************ 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:24.247 ************************************ 00:16:24.247 START TEST nvmf_vfio_user 00:16:24.247 ************************************ 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:24.247 * Looking for test storage... 
00:16:24.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:24.247 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:24.248 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:24.248 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:24.248 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:24.248 23:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:24.248 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:24.248 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:24.248 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:24.248 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:24.248 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:24.248 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:24.248 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1374836 00:16:24.248 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:24.248 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1374836' 00:16:24.248 Process pid: 1374836 00:16:24.248 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:24.248 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1374836 00:16:24.248 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1374836 ']' 00:16:24.248 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.248 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:24.248 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.248 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:24.248 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:24.248 [2024-07-25 23:22:21.607749] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:16:24.248 [2024-07-25 23:22:21.607844] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.248 EAL: No free 2048 kB hugepages reported on node 1 00:16:24.248 [2024-07-25 23:22:21.640151] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:24.248 [2024-07-25 23:22:21.667454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:24.248 [2024-07-25 23:22:21.753959] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:24.248 [2024-07-25 23:22:21.754014] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.248 [2024-07-25 23:22:21.754038] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:24.248 [2024-07-25 23:22:21.754048] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:24.248 [2024-07-25 23:22:21.754064] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:24.248 [2024-07-25 23:22:21.754138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.248 [2024-07-25 23:22:21.754200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:24.248 [2024-07-25 23:22:21.754266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:24.248 [2024-07-25 23:22:21.754268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.248 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:24.248 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:16:24.248 23:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:25.184 23:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:25.442 23:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:25.442 23:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:25.442 23:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:25.442 23:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:25.442 23:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:25.700 Malloc1 00:16:25.700 23:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:25.958 23:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:26.215 23:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:26.473 23:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:26.473 23:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:26.473 23:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:26.758 Malloc2 00:16:26.758 23:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:27.015 23:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:27.272 23:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:27.530 23:22:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:27.530 23:22:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:27.530 23:22:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:27.530 23:22:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:27.530 23:22:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:27.530 23:22:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:27.530 [2024-07-25 23:22:25.215961] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:16:27.530 [2024-07-25 23:22:25.216006] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1375267 ] 00:16:27.530 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.530 [2024-07-25 23:22:25.231813] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
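The VFIO-user leg running here swaps IP addressing for filesystem addressing: the listener address handed to nvmf_subsystem_add_listener is a directory in which the target creates the emulated controller's socket (the .../1/cntrl path logged below), and the service id is a placeholder. A sketch of the pattern, using only the rpc.py subcommands and paths traced above:

# VFIO-user target provisioning (one controller; the test repeats this for
# /var/run/vfio-user/domain/vfio-user2/2 with Malloc2 and cnode2).
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
./scripts/rpc.py nvmf_create_transport -t VFIOUSER
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
    -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

# An initiator dials the same directory through an SPDK transport ID string
# rather than an IP/port pair; -g corresponds to the --single-file-segments
# EAL flag visible in the parameters line above, and the -L flags enable the
# nvme/nvme_vfio/vfio_pci debug components whose output follows.
./build/bin/spdk_nvme_identify \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    -g -L nvme -L nvme_vfio -L vfio_pci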
00:16:27.530 [2024-07-25 23:22:25.249368] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:27.791 [2024-07-25 23:22:25.258497] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:27.791 [2024-07-25 23:22:25.258530] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3a69b21000 00:16:27.791 [2024-07-25 23:22:25.259476] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:27.791 [2024-07-25 23:22:25.260473] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:27.791 [2024-07-25 23:22:25.261481] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:27.791 [2024-07-25 23:22:25.262487] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:27.791 [2024-07-25 23:22:25.263495] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:27.791 [2024-07-25 23:22:25.264499] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:27.791 [2024-07-25 23:22:25.265505] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:27.791 [2024-07-25 23:22:25.266510] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:27.791 [2024-07-25 23:22:25.267521] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:27.791 [2024-07-25 23:22:25.267541] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3a688e3000 00:16:27.791 [2024-07-25 23:22:25.268659] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:27.791 [2024-07-25 23:22:25.284247] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:27.791 [2024-07-25 23:22:25.284289] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:16:27.791 [2024-07-25 23:22:25.286630] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:27.791 [2024-07-25 23:22:25.286685] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:27.791 [2024-07-25 23:22:25.286783] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:16:27.791 [2024-07-25 23:22:25.286815] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:16:27.791 [2024-07-25 23:22:25.286826] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 
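The admin-queue bring-up traced through this sequence reads more easily with the BAR0 register map at hand. The decode below applies the NVMe base specification's register layout to the values this trace reports (the vfio-user transport emulates BAR0, so the offsets are the standard ones):

# Registers behind the nvme_vfio_ctrlr_get_reg/set_reg lines in this sequence:
#   0x00 CAP  = 0x201e0100ff -> MQES=0xff (256-entry queues), CQR=1 (contiguous
#               queues required), TO=30 x 500 ms (the 15000 ms "Reset Timeout"
#               in the Identify dump), DSTRD=0 (4-byte doorbell stride)
#   0x08 VS   = 0x10300 -> NVMe 1.3
#   0x14 CC   controller configuration; the 0x460001 write = EN=1,
#               IOSQES=6 (64-byte SQEs), IOCQES=4 (16-byte CQEs)
#   0x1c CSTS controller status; RDY is polled 0 -> 1 across the enable
#   0x24 AQA  = 0xff00ff -> 256-entry admin SQ and CQ
#   0x28 ASQ, 0x30 ACQ -> admin queue base addresses
# Traced order: read CAP and VS, confirm CC.EN=0 and CSTS.RDY=0, program
# ASQ/ACQ/AQA, set CC.EN=1, poll CSTS.RDY until 1, then issue Identify.

On a local PCIe controller the same register map can be dumped for comparison with nvme-cli's show-regs subcommand, where supported.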
00:16:27.791 [2024-07-25 23:22:25.287620] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:27.791 [2024-07-25 23:22:25.287645] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:16:27.791 [2024-07-25 23:22:25.287658] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:16:27.791 [2024-07-25 23:22:25.288625] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:27.791 [2024-07-25 23:22:25.288645] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:16:27.792 [2024-07-25 23:22:25.288658] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:16:27.792 [2024-07-25 23:22:25.289627] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:27.792 [2024-07-25 23:22:25.289646] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:27.792 [2024-07-25 23:22:25.290636] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:27.792 [2024-07-25 23:22:25.290654] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:16:27.792 [2024-07-25 23:22:25.290664] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:16:27.792 [2024-07-25 23:22:25.290676] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:27.792 [2024-07-25 23:22:25.290786] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:16:27.792 [2024-07-25 23:22:25.290794] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:27.792 [2024-07-25 23:22:25.290804] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:27.792 [2024-07-25 23:22:25.295069] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:27.792 [2024-07-25 23:22:25.295669] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:27.792 [2024-07-25 23:22:25.296673] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:27.792 [2024-07-25 23:22:25.297670] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:27.792 [2024-07-25 23:22:25.297787] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for 
CSTS.RDY = 1 (timeout 15000 ms) 00:16:27.792 [2024-07-25 23:22:25.298691] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:27.792 [2024-07-25 23:22:25.298709] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:27.792 [2024-07-25 23:22:25.298719] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:16:27.792 [2024-07-25 23:22:25.298743] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:16:27.792 [2024-07-25 23:22:25.298762] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:16:27.792 [2024-07-25 23:22:25.298793] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:27.792 [2024-07-25 23:22:25.298803] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:27.792 [2024-07-25 23:22:25.298811] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:27.792 [2024-07-25 23:22:25.298833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:27.792 [2024-07-25 23:22:25.298896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:27.792 [2024-07-25 23:22:25.298915] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:16:27.792 [2024-07-25 23:22:25.298923] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:16:27.792 [2024-07-25 23:22:25.298931] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:16:27.792 [2024-07-25 23:22:25.298940] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:27.792 [2024-07-25 23:22:25.298949] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:16:27.792 [2024-07-25 23:22:25.298957] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:16:27.792 [2024-07-25 23:22:25.298965] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:16:27.792 [2024-07-25 23:22:25.298979] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:16:27.792 [2024-07-25 23:22:25.298998] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:27.792 [2024-07-25 23:22:25.299014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:27.792 [2024-07-25 23:22:25.299037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.792 [2024-07-25 23:22:25.299074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.792 [2024-07-25 23:22:25.299088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.792 [2024-07-25 23:22:25.299104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.792 [2024-07-25 23:22:25.299117] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:27.792 [2024-07-25 23:22:25.299134] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:27.792 [2024-07-25 23:22:25.299149] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:27.792 [2024-07-25 23:22:25.299162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:27.792 [2024-07-25 23:22:25.299173] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:16:27.792 [2024-07-25 23:22:25.299183] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:27.792 [2024-07-25 23:22:25.299200] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:16:27.792 [2024-07-25 23:22:25.299212] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:27.792 [2024-07-25 23:22:25.299226] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:27.792 [2024-07-25 23:22:25.299239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:27.792 [2024-07-25 23:22:25.299305] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:16:27.792 [2024-07-25 23:22:25.299321] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:27.792 [2024-07-25 23:22:25.299336] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:27.792 [2024-07-25 23:22:25.299345] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:27.792 [2024-07-25 23:22:25.299366] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:27.792 [2024-07-25 23:22:25.299376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:27.792 [2024-07-25 23:22:25.299391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:27.792 [2024-07-25 23:22:25.299411] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:16:27.792 [2024-07-25 23:22:25.299427] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:16:27.792 [2024-07-25 23:22:25.299443] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:16:27.792 [2024-07-25 23:22:25.299455] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:27.792 [2024-07-25 23:22:25.299463] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:27.792 [2024-07-25 23:22:25.299470] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:27.792 [2024-07-25 23:22:25.299480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:27.792 [2024-07-25 23:22:25.299503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:27.793 [2024-07-25 23:22:25.299530] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:27.793 [2024-07-25 23:22:25.299546] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:27.793 [2024-07-25 23:22:25.299558] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:27.793 [2024-07-25 23:22:25.299566] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:27.793 [2024-07-25 23:22:25.299573] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:27.793 [2024-07-25 23:22:25.299583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:27.793 [2024-07-25 23:22:25.299601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:27.793 [2024-07-25 23:22:25.299616] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:27.793 [2024-07-25 23:22:25.299628] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:16:27.793 [2024-07-25 23:22:25.299642] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:16:27.793 [2024-07-25 23:22:25.299657] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:27.793 [2024-07-25 23:22:25.299666] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:27.793 [2024-07-25 23:22:25.299676] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:16:27.793 [2024-07-25 23:22:25.299686] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:16:27.793 [2024-07-25 23:22:25.299694] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:16:27.793 [2024-07-25 23:22:25.299703] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:16:27.793 [2024-07-25 23:22:25.299733] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:27.793 [2024-07-25 23:22:25.299752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:27.793 [2024-07-25 23:22:25.299771] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:27.793 [2024-07-25 23:22:25.299786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:27.793 [2024-07-25 23:22:25.299803] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:27.793 [2024-07-25 23:22:25.299814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:27.793 [2024-07-25 23:22:25.299831] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:27.793 [2024-07-25 23:22:25.299843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:27.793 [2024-07-25 23:22:25.299865] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:27.793 [2024-07-25 23:22:25.299879] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:27.793 [2024-07-25 23:22:25.299887] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:27.793 [2024-07-25 23:22:25.299893] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:27.793 [2024-07-25 23:22:25.299900] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:27.793 [2024-07-25 23:22:25.299910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:27.793 [2024-07-25 23:22:25.299922] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:27.793 [2024-07-25 23:22:25.299930] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:27.793 [2024-07-25 23:22:25.299937] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:27.793 [2024-07-25 23:22:25.299946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:27.793 [2024-07-25 23:22:25.299957] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:27.793 [2024-07-25 23:22:25.299965] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:27.793 [2024-07-25 23:22:25.299972] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:27.793 [2024-07-25 23:22:25.299981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:27.793 [2024-07-25 23:22:25.299993] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:27.793 [2024-07-25 23:22:25.300002] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:27.793 [2024-07-25 23:22:25.300008] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:27.793 [2024-07-25 23:22:25.300017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:27.793 [2024-07-25 23:22:25.300029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:27.793 [2024-07-25 23:22:25.300071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:27.793 [2024-07-25 23:22:25.300092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:27.793 [2024-07-25 23:22:25.300105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:27.793 ===================================================== 00:16:27.793 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:27.793 ===================================================== 00:16:27.793 Controller Capabilities/Features 00:16:27.793 ================================ 00:16:27.793 Vendor ID: 4e58 00:16:27.793 Subsystem Vendor ID: 4e58 00:16:27.793 Serial Number: SPDK1 00:16:27.793 Model Number: SPDK bdev Controller 00:16:27.793 Firmware Version: 24.09 00:16:27.793 Recommended Arb Burst: 6 00:16:27.793 IEEE OUI Identifier: 8d 6b 50 00:16:27.793 Multi-path I/O 00:16:27.793 May have multiple subsystem ports: Yes 00:16:27.793 May have multiple controllers: Yes 00:16:27.793 Associated with SR-IOV VF: No 00:16:27.793 Max Data Transfer Size: 131072 00:16:27.793 Max Number of Namespaces: 32 00:16:27.793 Max Number of I/O Queues: 127 00:16:27.793 NVMe Specification Version (VS): 1.3 00:16:27.793 NVMe Specification Version (Identify): 1.3 00:16:27.793 Maximum Queue Entries: 256 00:16:27.793 Contiguous Queues Required: Yes 00:16:27.793 Arbitration Mechanisms Supported 00:16:27.793 Weighted Round Robin: Not Supported 00:16:27.793 Vendor Specific: Not Supported 00:16:27.793 Reset Timeout: 15000 ms 00:16:27.793 Doorbell Stride: 4 bytes 00:16:27.793 NVM Subsystem Reset: Not Supported 00:16:27.793 Command Sets Supported 00:16:27.793 NVM Command Set: Supported 00:16:27.793 Boot Partition: Not Supported 00:16:27.793 Memory Page Size Minimum: 4096 bytes 00:16:27.793 Memory Page Size Maximum: 4096 bytes 00:16:27.793 Persistent Memory Region: Not Supported 00:16:27.793 Optional Asynchronous Events Supported 00:16:27.793 Namespace Attribute Notices: 
Supported 00:16:27.793 Firmware Activation Notices: Not Supported 00:16:27.793 ANA Change Notices: Not Supported 00:16:27.793 PLE Aggregate Log Change Notices: Not Supported 00:16:27.793 LBA Status Info Alert Notices: Not Supported 00:16:27.793 EGE Aggregate Log Change Notices: Not Supported 00:16:27.793 Normal NVM Subsystem Shutdown event: Not Supported 00:16:27.793 Zone Descriptor Change Notices: Not Supported 00:16:27.793 Discovery Log Change Notices: Not Supported 00:16:27.793 Controller Attributes 00:16:27.793 128-bit Host Identifier: Supported 00:16:27.793 Non-Operational Permissive Mode: Not Supported 00:16:27.793 NVM Sets: Not Supported 00:16:27.793 Read Recovery Levels: Not Supported 00:16:27.793 Endurance Groups: Not Supported 00:16:27.793 Predictable Latency Mode: Not Supported 00:16:27.794 Traffic Based Keep ALive: Not Supported 00:16:27.794 Namespace Granularity: Not Supported 00:16:27.794 SQ Associations: Not Supported 00:16:27.794 UUID List: Not Supported 00:16:27.794 Multi-Domain Subsystem: Not Supported 00:16:27.794 Fixed Capacity Management: Not Supported 00:16:27.794 Variable Capacity Management: Not Supported 00:16:27.794 Delete Endurance Group: Not Supported 00:16:27.794 Delete NVM Set: Not Supported 00:16:27.794 Extended LBA Formats Supported: Not Supported 00:16:27.794 Flexible Data Placement Supported: Not Supported 00:16:27.794 00:16:27.794 Controller Memory Buffer Support 00:16:27.794 ================================ 00:16:27.794 Supported: No 00:16:27.794 00:16:27.794 Persistent Memory Region Support 00:16:27.794 ================================ 00:16:27.794 Supported: No 00:16:27.794 00:16:27.794 Admin Command Set Attributes 00:16:27.794 ============================ 00:16:27.794 Security Send/Receive: Not Supported 00:16:27.794 Format NVM: Not Supported 00:16:27.794 Firmware Activate/Download: Not Supported 00:16:27.794 Namespace Management: Not Supported 00:16:27.794 Device Self-Test: Not Supported 00:16:27.794 Directives: Not Supported 00:16:27.794 NVMe-MI: Not Supported 00:16:27.794 Virtualization Management: Not Supported 00:16:27.794 Doorbell Buffer Config: Not Supported 00:16:27.794 Get LBA Status Capability: Not Supported 00:16:27.794 Command & Feature Lockdown Capability: Not Supported 00:16:27.794 Abort Command Limit: 4 00:16:27.794 Async Event Request Limit: 4 00:16:27.794 Number of Firmware Slots: N/A 00:16:27.794 Firmware Slot 1 Read-Only: N/A 00:16:27.794 Firmware Activation Without Reset: N/A 00:16:27.794 Multiple Update Detection Support: N/A 00:16:27.794 Firmware Update Granularity: No Information Provided 00:16:27.794 Per-Namespace SMART Log: No 00:16:27.794 Asymmetric Namespace Access Log Page: Not Supported 00:16:27.794 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:27.794 Command Effects Log Page: Supported 00:16:27.794 Get Log Page Extended Data: Supported 00:16:27.794 Telemetry Log Pages: Not Supported 00:16:27.794 Persistent Event Log Pages: Not Supported 00:16:27.794 Supported Log Pages Log Page: May Support 00:16:27.794 Commands Supported & Effects Log Page: Not Supported 00:16:27.794 Feature Identifiers & Effects Log Page:May Support 00:16:27.794 NVMe-MI Commands & Effects Log Page: May Support 00:16:27.794 Data Area 4 for Telemetry Log: Not Supported 00:16:27.794 Error Log Page Entries Supported: 128 00:16:27.794 Keep Alive: Supported 00:16:27.794 Keep Alive Granularity: 10000 ms 00:16:27.794 00:16:27.794 NVM Command Set Attributes 00:16:27.794 ========================== 00:16:27.794 Submission Queue Entry Size 00:16:27.794 Max: 64 
00:16:27.794 Min: 64 00:16:27.794 Completion Queue Entry Size 00:16:27.794 Max: 16 00:16:27.794 Min: 16 00:16:27.794 Number of Namespaces: 32 00:16:27.794 Compare Command: Supported 00:16:27.794 Write Uncorrectable Command: Not Supported 00:16:27.794 Dataset Management Command: Supported 00:16:27.794 Write Zeroes Command: Supported 00:16:27.794 Set Features Save Field: Not Supported 00:16:27.794 Reservations: Not Supported 00:16:27.794 Timestamp: Not Supported 00:16:27.794 Copy: Supported 00:16:27.794 Volatile Write Cache: Present 00:16:27.794 Atomic Write Unit (Normal): 1 00:16:27.794 Atomic Write Unit (PFail): 1 00:16:27.794 Atomic Compare & Write Unit: 1 00:16:27.794 Fused Compare & Write: Supported 00:16:27.794 Scatter-Gather List 00:16:27.794 SGL Command Set: Supported (Dword aligned) 00:16:27.794 SGL Keyed: Not Supported 00:16:27.794 SGL Bit Bucket Descriptor: Not Supported 00:16:27.794 SGL Metadata Pointer: Not Supported 00:16:27.794 Oversized SGL: Not Supported 00:16:27.794 SGL Metadata Address: Not Supported 00:16:27.794 SGL Offset: Not Supported 00:16:27.794 Transport SGL Data Block: Not Supported 00:16:27.794 Replay Protected Memory Block: Not Supported 00:16:27.794 00:16:27.794 Firmware Slot Information 00:16:27.794 ========================= 00:16:27.794 Active slot: 1 00:16:27.794 Slot 1 Firmware Revision: 24.09 00:16:27.794 00:16:27.794 00:16:27.794 Commands Supported and Effects 00:16:27.794 ============================== 00:16:27.794 Admin Commands 00:16:27.794 -------------- 00:16:27.794 Get Log Page (02h): Supported 00:16:27.794 Identify (06h): Supported 00:16:27.794 Abort (08h): Supported 00:16:27.794 Set Features (09h): Supported 00:16:27.794 Get Features (0Ah): Supported 00:16:27.794 Asynchronous Event Request (0Ch): Supported 00:16:27.794 Keep Alive (18h): Supported 00:16:27.794 I/O Commands 00:16:27.794 ------------ 00:16:27.794 Flush (00h): Supported LBA-Change 00:16:27.794 Write (01h): Supported LBA-Change 00:16:27.794 Read (02h): Supported 00:16:27.794 Compare (05h): Supported 00:16:27.794 Write Zeroes (08h): Supported LBA-Change 00:16:27.794 Dataset Management (09h): Supported LBA-Change 00:16:27.794 Copy (19h): Supported LBA-Change 00:16:27.794 00:16:27.794 Error Log 00:16:27.794 ========= 00:16:27.794 00:16:27.794 Arbitration 00:16:27.794 =========== 00:16:27.794 Arbitration Burst: 1 00:16:27.794 00:16:27.794 Power Management 00:16:27.794 ================ 00:16:27.794 Number of Power States: 1 00:16:27.794 Current Power State: Power State #0 00:16:27.794 Power State #0: 00:16:27.794 Max Power: 0.00 W 00:16:27.794 Non-Operational State: Operational 00:16:27.794 Entry Latency: Not Reported 00:16:27.794 Exit Latency: Not Reported 00:16:27.794 Relative Read Throughput: 0 00:16:27.794 Relative Read Latency: 0 00:16:27.794 Relative Write Throughput: 0 00:16:27.794 Relative Write Latency: 0 00:16:27.794 Idle Power: Not Reported 00:16:27.794 Active Power: Not Reported 00:16:27.794 Non-Operational Permissive Mode: Not Supported 00:16:27.794 00:16:27.794 Health Information 00:16:27.794 ================== 00:16:27.794 Critical Warnings: 00:16:27.794 Available Spare Space: OK 00:16:27.794 Temperature: OK 00:16:27.794 Device Reliability: OK 00:16:27.794 Read Only: No 00:16:27.794 Volatile Memory Backup: OK 00:16:27.794 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:27.794 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:27.794 Available Spare: 0% 00:16:27.794 [2024-07-25 23:22:25.300235] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET
FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:27.794 [2024-07-25 23:22:25.300252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:27.794 [2024-07-25 23:22:25.300294] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:16:27.794 [2024-07-25 23:22:25.300312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.794 [2024-07-25 23:22:25.300325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.794 [2024-07-25 23:22:25.300336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.794 [2024-07-25 23:22:25.300346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.794 [2024-07-25 23:22:25.300694] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:27.794 [2024-07-25 23:22:25.300721] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:27.794 [2024-07-25 23:22:25.301691] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:27.794 [2024-07-25 23:22:25.301761] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:16:27.794 [2024-07-25 23:22:25.301786] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:16:27.795 [2024-07-25 23:22:25.302702] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:27.795 [2024-07-25 23:22:25.302725] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:16:27.795 [2024-07-25 23:22:25.302781] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:27.795 [2024-07-25 23:22:25.304743] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:27.795 Available Spare Threshold: 0% 00:16:27.795 Life Percentage Used: 0% 00:16:27.795 Data Units Read: 0 00:16:27.795 Data Units Written: 0 00:16:27.795 Host Read Commands: 0 00:16:27.795 Host Write Commands: 0 00:16:27.795 Controller Busy Time: 0 minutes 00:16:27.795 Power Cycles: 0 00:16:27.795 Power On Hours: 0 hours 00:16:27.795 Unsafe Shutdowns: 0 00:16:27.795 Unrecoverable Media Errors: 0 00:16:27.795 Lifetime Error Log Entries: 0 00:16:27.795 Warning Temperature Time: 0 minutes 00:16:27.795 Critical Temperature Time: 0 minutes 00:16:27.795 00:16:27.795 Number of Queues 00:16:27.795 ================ 00:16:27.795 Number of I/O Submission Queues: 127 00:16:27.795 Number of I/O Completion Queues: 127 00:16:27.795 00:16:27.795 Active Namespaces 00:16:27.795 ================= 00:16:27.795 Namespace ID:1 00:16:27.795 Error Recovery Timeout: Unlimited 00:16:27.795 Command Set Identifier: NVM (00h) 00:16:27.795 Deallocate: Supported 00:16:27.795 Deallocated/Unwritten Error: Not
Supported 00:16:27.795 Deallocated Read Value: Unknown 00:16:27.795 Deallocate in Write Zeroes: Not Supported 00:16:27.795 Deallocated Guard Field: 0xFFFF 00:16:27.795 Flush: Supported 00:16:27.795 Reservation: Supported 00:16:27.795 Namespace Sharing Capabilities: Multiple Controllers 00:16:27.795 Size (in LBAs): 131072 (0GiB) 00:16:27.795 Capacity (in LBAs): 131072 (0GiB) 00:16:27.795 Utilization (in LBAs): 131072 (0GiB) 00:16:27.795 NGUID: A1FD4CA47588441B8AAE0E0466DCB675 00:16:27.795 UUID: a1fd4ca4-7588-441b-8aae-0e0466dcb675 00:16:27.795 Thin Provisioning: Not Supported 00:16:27.795 Per-NS Atomic Units: Yes 00:16:27.795 Atomic Boundary Size (Normal): 0 00:16:27.795 Atomic Boundary Size (PFail): 0 00:16:27.795 Atomic Boundary Offset: 0 00:16:27.795 Maximum Single Source Range Length: 65535 00:16:27.795 Maximum Copy Length: 65535 00:16:27.795 Maximum Source Range Count: 1 00:16:27.795 NGUID/EUI64 Never Reused: No 00:16:27.795 Namespace Write Protected: No 00:16:27.795 Number of LBA Formats: 1 00:16:27.795 Current LBA Format: LBA Format #00 00:16:27.795 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:27.795 00:16:27.795 23:22:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:27.795 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.052 [2024-07-25 23:22:25.534924] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:33.320 Initializing NVMe Controllers 00:16:33.321 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:33.321 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:33.321 Initialization complete. Launching workers. 00:16:33.321 ======================================================== 00:16:33.321 Latency(us) 00:16:33.321 Device Information : IOPS MiB/s Average min max 00:16:33.321 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34550.44 134.96 3704.14 1181.71 8998.94 00:16:33.321 ======================================================== 00:16:33.321 Total : 34550.44 134.96 3704.14 1181.71 8998.94 00:16:33.321 00:16:33.321 [2024-07-25 23:22:30.561774] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:33.321 23:22:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:33.321 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.321 [2024-07-25 23:22:30.791834] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:38.606 Initializing NVMe Controllers 00:16:38.606 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:38.607 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:38.607 Initialization complete. Launching workers. 
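The two spdk_nvme_perf passes above and below differ only in the -w workload argument (read, then write). A minimal sketch of the shared invocation form, assuming it is launched from the root of an SPDK checkout rather than the absolute Jenkins workspace path shown in the log; the flag glosses in the comments are editorial readings of common spdk_nvme_perf options, not log output:

  # -q 128 = queue depth, -o 4096 = 4 KiB I/O, -t 5 = 5 s run, -c 0x2 = core mask;
  # the first pass uses -w read, the second -w write
  ./build/bin/spdk_nvme_perf \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

The latency table for the write pass follows.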
00:16:38.607 ======================================================== 00:16:38.607 Latency(us) 00:16:38.607 Device Information : IOPS MiB/s Average min max 00:16:38.607 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16057.33 62.72 7976.72 6970.92 9003.93 00:16:38.607 ======================================================== 00:16:38.607 Total : 16057.33 62.72 7976.72 6970.92 9003.93 00:16:38.607 00:16:38.607 [2024-07-25 23:22:35.832262] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:38.607 23:22:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:38.607 EAL: No free 2048 kB hugepages reported on node 1 00:16:38.607 [2024-07-25 23:22:36.046379] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:43.879 [2024-07-25 23:22:41.110380] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:43.879 Initializing NVMe Controllers 00:16:43.879 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:43.879 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:43.879 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:43.879 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:43.879 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:43.879 Initialization complete. Launching workers. 00:16:43.879 Starting thread on core 2 00:16:43.879 Starting thread on core 3 00:16:43.879 Starting thread on core 1 00:16:43.879 23:22:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:43.879 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.879 [2024-07-25 23:22:41.417514] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:47.188 [2024-07-25 23:22:44.570800] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:47.188 Initializing NVMe Controllers 00:16:47.188 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:47.188 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:47.188 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:47.188 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:47.188 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:47.188 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:47.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:47.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:47.188 Initialization complete. Launching workers. 
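The urgent-priority threads and per-core IO/s figures that follow come from the arbitration example started above; note that it prints its own effective configuration (-q 64 -w randrw -M 50 -c 0xf ...) just before launching workers. A sketch of the outer invocation, with the Jenkins workspace path shortened to a relative one (an assumption) and the flags otherwise verbatim from the log:

  ./build/examples/arbitration \
    -t 3 \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    -d 256 -g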
00:16:47.188 Starting thread on core 1 with urgent priority queue 00:16:47.188 Starting thread on core 2 with urgent priority queue 00:16:47.188 Starting thread on core 3 with urgent priority queue 00:16:47.188 Starting thread on core 0 with urgent priority queue 00:16:47.188 SPDK bdev Controller (SPDK1 ) core 0: 4143.00 IO/s 24.14 secs/100000 ios 00:16:47.188 SPDK bdev Controller (SPDK1 ) core 1: 4400.33 IO/s 22.73 secs/100000 ios 00:16:47.188 SPDK bdev Controller (SPDK1 ) core 2: 4404.33 IO/s 22.70 secs/100000 ios 00:16:47.188 SPDK bdev Controller (SPDK1 ) core 3: 4074.00 IO/s 24.55 secs/100000 ios 00:16:47.189 ======================================================== 00:16:47.189 00:16:47.189 23:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:47.189 EAL: No free 2048 kB hugepages reported on node 1 00:16:47.189 [2024-07-25 23:22:44.878581] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:47.189 Initializing NVMe Controllers 00:16:47.189 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:47.189 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:47.189 Namespace ID: 1 size: 0GB 00:16:47.189 Initialization complete. 00:16:47.189 INFO: using host memory buffer for IO 00:16:47.189 Hello world! 00:16:47.189 [2024-07-25 23:22:44.913091] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:47.447 23:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:47.447 EAL: No free 2048 kB hugepages reported on node 1 00:16:47.706 [2024-07-25 23:22:45.201436] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:48.646 Initializing NVMe Controllers 00:16:48.646 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:48.646 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:48.646 Initialization complete. Launching workers. 
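Everything from the submit/complete averages down through the end of the "Complete histogram" table is output from the overhead tool launched above, which times each submission and completion and, with -H, prints the per-range latency histograms that follow (that reading of -H is inferred from the output, not stated in the log). A sketch of the invocation, path shortened to a relative one:

  ./test/nvme/overhead/overhead \
    -o 4096 -t 1 -H -g -d 256 \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'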
00:16:48.646 submit (in ns) avg, min, max = 5734.2, 3540.0, 4016653.3 00:16:48.646 complete (in ns) avg, min, max = 26476.0, 2068.9, 5010958.9 00:16:48.646 00:16:48.646 Submit histogram 00:16:48.646 ================ 00:16:48.646 Range in us Cumulative Count 00:16:48.646 3.532 - 3.556: 0.0897% ( 12) 00:16:48.646 3.556 - 3.579: 0.7620% ( 90) 00:16:48.646 3.579 - 3.603: 1.9724% ( 162) 00:16:48.646 3.603 - 3.627: 7.0527% ( 680) 00:16:48.646 3.627 - 3.650: 13.4329% ( 854) 00:16:48.646 3.650 - 3.674: 23.6160% ( 1363) 00:16:48.646 3.674 - 3.698: 33.1416% ( 1275) 00:16:48.646 3.698 - 3.721: 43.5786% ( 1397) 00:16:48.646 3.721 - 3.745: 50.0261% ( 863) 00:16:48.646 3.745 - 3.769: 55.3754% ( 716) 00:16:48.646 3.769 - 3.793: 59.4845% ( 550) 00:16:48.646 3.793 - 3.816: 63.2947% ( 510) 00:16:48.646 3.816 - 3.840: 66.6791% ( 453) 00:16:48.646 3.840 - 3.864: 69.9963% ( 444) 00:16:48.646 3.864 - 3.887: 73.8737% ( 519) 00:16:48.646 3.887 - 3.911: 77.7512% ( 519) 00:16:48.646 3.911 - 3.935: 81.5913% ( 514) 00:16:48.646 3.935 - 3.959: 84.8412% ( 435) 00:16:48.646 3.959 - 3.982: 87.3291% ( 333) 00:16:48.646 3.982 - 4.006: 89.1969% ( 250) 00:16:48.646 4.006 - 4.030: 90.7060% ( 202) 00:16:48.646 4.030 - 4.053: 92.0433% ( 179) 00:16:48.646 4.053 - 4.077: 93.1864% ( 153) 00:16:48.646 4.077 - 4.101: 94.1352% ( 127) 00:16:48.646 4.101 - 4.124: 94.9496% ( 109) 00:16:48.646 4.124 - 4.148: 95.5473% ( 80) 00:16:48.646 4.148 - 4.172: 96.0254% ( 64) 00:16:48.646 4.172 - 4.196: 96.3840% ( 48) 00:16:48.646 4.196 - 4.219: 96.6306% ( 33) 00:16:48.646 4.219 - 4.243: 96.8771% ( 33) 00:16:48.646 4.243 - 4.267: 97.0938% ( 29) 00:16:48.646 4.267 - 4.290: 97.2208% ( 17) 00:16:48.646 4.290 - 4.314: 97.2955% ( 10) 00:16:48.646 4.314 - 4.338: 97.4075% ( 15) 00:16:48.646 4.338 - 4.361: 97.4823% ( 10) 00:16:48.646 4.361 - 4.385: 97.5346% ( 7) 00:16:48.646 4.385 - 4.409: 97.5719% ( 5) 00:16:48.646 4.409 - 4.433: 97.6541% ( 11) 00:16:48.646 4.433 - 4.456: 97.6914% ( 5) 00:16:48.647 4.456 - 4.480: 97.7139% ( 3) 00:16:48.647 4.480 - 4.504: 97.7512% ( 5) 00:16:48.647 4.504 - 4.527: 97.7662% ( 2) 00:16:48.647 4.622 - 4.646: 97.7811% ( 2) 00:16:48.647 4.670 - 4.693: 97.7960% ( 2) 00:16:48.647 4.693 - 4.717: 97.8259% ( 4) 00:16:48.647 4.717 - 4.741: 97.8708% ( 6) 00:16:48.647 4.741 - 4.764: 97.8857% ( 2) 00:16:48.647 4.764 - 4.788: 97.9156% ( 4) 00:16:48.647 4.788 - 4.812: 97.9380% ( 3) 00:16:48.647 4.812 - 4.836: 97.9828% ( 6) 00:16:48.647 4.836 - 4.859: 98.0426% ( 8) 00:16:48.647 4.859 - 4.883: 98.1472% ( 14) 00:16:48.647 4.883 - 4.907: 98.1696% ( 3) 00:16:48.647 4.907 - 4.930: 98.2144% ( 6) 00:16:48.647 4.930 - 4.954: 98.2368% ( 3) 00:16:48.647 4.954 - 4.978: 98.2667% ( 4) 00:16:48.647 4.978 - 5.001: 98.3489% ( 11) 00:16:48.647 5.001 - 5.025: 98.3788% ( 4) 00:16:48.647 5.025 - 5.049: 98.4012% ( 3) 00:16:48.647 5.049 - 5.073: 98.4311% ( 4) 00:16:48.647 5.073 - 5.096: 98.4535% ( 3) 00:16:48.647 5.096 - 5.120: 98.4684% ( 2) 00:16:48.647 5.120 - 5.144: 98.4834% ( 2) 00:16:48.647 5.144 - 5.167: 98.4908% ( 1) 00:16:48.647 5.167 - 5.191: 98.4983% ( 1) 00:16:48.647 5.191 - 5.215: 98.5282% ( 4) 00:16:48.647 5.215 - 5.239: 98.5357% ( 1) 00:16:48.647 5.239 - 5.262: 98.5431% ( 1) 00:16:48.647 5.262 - 5.286: 98.5581% ( 2) 00:16:48.647 5.357 - 5.381: 98.5656% ( 1) 00:16:48.647 5.523 - 5.547: 98.5730% ( 1) 00:16:48.647 5.594 - 5.618: 98.5805% ( 1) 00:16:48.647 5.641 - 5.665: 98.5880% ( 1) 00:16:48.647 5.665 - 5.689: 98.5954% ( 1) 00:16:48.647 6.044 - 6.068: 98.6104% ( 2) 00:16:48.647 6.116 - 6.163: 98.6179% ( 1) 00:16:48.647 6.163 - 6.210: 98.6253% ( 1) 
00:16:48.647 6.447 - 6.495: 98.6328% ( 1) 00:16:48.647 6.542 - 6.590: 98.6403% ( 1) 00:16:48.647 6.779 - 6.827: 98.6477% ( 1) 00:16:48.647 6.827 - 6.874: 98.6552% ( 1) 00:16:48.647 6.921 - 6.969: 98.6627% ( 1) 00:16:48.647 6.969 - 7.016: 98.6702% ( 1) 00:16:48.647 7.016 - 7.064: 98.6851% ( 2) 00:16:48.647 7.064 - 7.111: 98.6926% ( 1) 00:16:48.647 7.159 - 7.206: 98.7075% ( 2) 00:16:48.647 7.253 - 7.301: 98.7150% ( 1) 00:16:48.647 7.301 - 7.348: 98.7523% ( 5) 00:16:48.647 7.396 - 7.443: 98.7598% ( 1) 00:16:48.647 7.490 - 7.538: 98.7673% ( 1) 00:16:48.647 7.585 - 7.633: 98.7747% ( 1) 00:16:48.647 7.680 - 7.727: 98.7897% ( 2) 00:16:48.647 7.727 - 7.775: 98.7972% ( 1) 00:16:48.647 7.822 - 7.870: 98.8046% ( 1) 00:16:48.647 7.917 - 7.964: 98.8345% ( 4) 00:16:48.647 7.964 - 8.012: 98.8495% ( 2) 00:16:48.647 8.107 - 8.154: 98.8644% ( 2) 00:16:48.647 8.154 - 8.201: 98.8719% ( 1) 00:16:48.647 8.296 - 8.344: 98.8793% ( 1) 00:16:48.647 8.439 - 8.486: 98.8868% ( 1) 00:16:48.647 8.533 - 8.581: 98.9092% ( 3) 00:16:48.647 8.581 - 8.628: 98.9242% ( 2) 00:16:48.647 8.628 - 8.676: 98.9316% ( 1) 00:16:48.647 8.676 - 8.723: 98.9391% ( 1) 00:16:48.647 8.865 - 8.913: 98.9466% ( 1) 00:16:48.647 8.960 - 9.007: 98.9541% ( 1) 00:16:48.647 9.007 - 9.055: 98.9615% ( 1) 00:16:48.647 9.197 - 9.244: 98.9690% ( 1) 00:16:48.647 9.244 - 9.292: 98.9765% ( 1) 00:16:48.647 9.434 - 9.481: 98.9839% ( 1) 00:16:48.647 9.813 - 9.861: 98.9914% ( 1) 00:16:48.647 9.861 - 9.908: 98.9989% ( 1) 00:16:48.647 9.908 - 9.956: 99.0064% ( 1) 00:16:48.647 10.193 - 10.240: 99.0138% ( 1) 00:16:48.647 10.287 - 10.335: 99.0213% ( 1) 00:16:48.647 10.430 - 10.477: 99.0288% ( 1) 00:16:48.647 10.524 - 10.572: 99.0362% ( 1) 00:16:48.647 10.951 - 10.999: 99.0512% ( 2) 00:16:48.647 11.093 - 11.141: 99.0586% ( 1) 00:16:48.647 11.188 - 11.236: 99.0736% ( 2) 00:16:48.647 11.236 - 11.283: 99.0885% ( 2) 00:16:48.647 11.378 - 11.425: 99.0960% ( 1) 00:16:48.647 11.473 - 11.520: 99.1109% ( 2) 00:16:48.647 11.947 - 11.994: 99.1184% ( 1) 00:16:48.647 12.136 - 12.231: 99.1334% ( 2) 00:16:48.647 12.231 - 12.326: 99.1408% ( 1) 00:16:48.647 12.326 - 12.421: 99.1483% ( 1) 00:16:48.647 12.421 - 12.516: 99.1558% ( 1) 00:16:48.647 12.610 - 12.705: 99.1632% ( 1) 00:16:48.647 13.084 - 13.179: 99.1782% ( 2) 00:16:48.647 13.179 - 13.274: 99.1857% ( 1) 00:16:48.647 13.653 - 13.748: 99.1931% ( 1) 00:16:48.647 13.843 - 13.938: 99.2006% ( 1) 00:16:48.647 13.938 - 14.033: 99.2155% ( 2) 00:16:48.647 14.317 - 14.412: 99.2230% ( 1) 00:16:48.647 14.696 - 14.791: 99.2305% ( 1) 00:16:48.647 15.265 - 15.360: 99.2380% ( 1) 00:16:48.647 15.360 - 15.455: 99.2454% ( 1) 00:16:48.647 17.161 - 17.256: 99.2529% ( 1) 00:16:48.647 17.446 - 17.541: 99.2678% ( 2) 00:16:48.647 17.541 - 17.636: 99.2977% ( 4) 00:16:48.647 17.636 - 17.730: 99.3276% ( 4) 00:16:48.647 17.730 - 17.825: 99.3425% ( 2) 00:16:48.647 17.825 - 17.920: 99.3500% ( 1) 00:16:48.647 17.920 - 18.015: 99.3724% ( 3) 00:16:48.647 18.015 - 18.110: 99.4023% ( 4) 00:16:48.647 18.110 - 18.204: 99.4920% ( 12) 00:16:48.647 18.204 - 18.299: 99.6040% ( 15) 00:16:48.647 18.299 - 18.394: 99.6190% ( 2) 00:16:48.647 18.394 - 18.489: 99.6563% ( 5) 00:16:48.647 18.489 - 18.584: 99.7086% ( 7) 00:16:48.647 18.584 - 18.679: 99.7460% ( 5) 00:16:48.647 18.679 - 18.773: 99.7609% ( 2) 00:16:48.647 18.773 - 18.868: 99.7759% ( 2) 00:16:48.647 18.868 - 18.963: 99.8132% ( 5) 00:16:48.647 18.963 - 19.058: 99.8356% ( 3) 00:16:48.647 19.058 - 19.153: 99.8581% ( 3) 00:16:48.647 19.153 - 19.247: 99.8805% ( 3) 00:16:48.647 19.247 - 19.342: 99.8879% ( 1) 00:16:48.647 
19.342 - 19.437: 99.8954% ( 1) 00:16:48.647 19.437 - 19.532: 99.9029% ( 1) 00:16:48.647 19.532 - 19.627: 99.9103% ( 1) 00:16:48.647 19.721 - 19.816: 99.9253% ( 2) 00:16:48.647 20.101 - 20.196: 99.9328% ( 1) 00:16:48.647 25.600 - 25.790: 99.9402% ( 1) 00:16:48.648 26.927 - 27.117: 99.9477% ( 1) 00:16:48.648 30.341 - 30.530: 99.9552% ( 1) 00:16:48.648 3980.705 - 4004.978: 99.9851% ( 4) 00:16:48.648 4004.978 - 4029.250: 100.0000% ( 2) 00:16:48.648 00:16:48.648 Complete histogram 00:16:48.648 ================== 00:16:48.648 Range in us Cumulative Count 00:16:48.648 2.062 - 2.074: 0.4856% ( 65) 00:16:48.648 2.074 - 2.086: 30.3549% ( 3998) 00:16:48.648 2.086 - 2.098: 49.5405% ( 2568) 00:16:48.648 2.098 - 2.110: 51.0049% ( 196) 00:16:48.648 2.110 - 2.121: 60.2167% ( 1233) 00:16:48.648 2.121 - 2.133: 63.2051% ( 400) 00:16:48.648 2.133 - 2.145: 65.9470% ( 367) 00:16:48.648 2.145 - 2.157: 75.2783% ( 1249) 00:16:48.648 2.157 - 2.169: 78.0127% ( 366) 00:16:48.648 2.169 - 2.181: 79.4397% ( 191) 00:16:48.648 2.181 - 2.193: 83.5487% ( 550) 00:16:48.648 2.193 - 2.204: 84.7815% ( 165) 00:16:48.648 2.204 - 2.216: 85.5435% ( 102) 00:16:48.648 2.216 - 2.228: 89.5106% ( 531) 00:16:48.648 2.228 - 2.240: 92.1554% ( 354) 00:16:48.648 2.240 - 2.252: 92.9324% ( 104) 00:16:48.648 2.252 - 2.264: 94.0381% ( 148) 00:16:48.648 2.264 - 2.276: 94.5013% ( 62) 00:16:48.648 2.276 - 2.287: 94.6956% ( 26) 00:16:48.648 2.287 - 2.299: 95.1588% ( 62) 00:16:48.648 2.299 - 2.311: 95.6220% ( 62) 00:16:48.648 2.311 - 2.323: 95.9357% ( 42) 00:16:48.648 2.323 - 2.335: 96.0179% ( 11) 00:16:48.648 2.335 - 2.347: 96.0628% ( 6) 00:16:48.648 2.347 - 2.359: 96.2122% ( 20) 00:16:48.648 2.359 - 2.370: 96.5185% ( 41) 00:16:48.648 2.370 - 2.382: 96.9369% ( 56) 00:16:48.648 2.382 - 2.394: 97.3851% ( 60) 00:16:48.648 2.394 - 2.406: 97.7886% ( 54) 00:16:48.648 2.406 - 2.418: 97.9455% ( 21) 00:16:48.648 2.418 - 2.430: 98.0725% ( 17) 00:16:48.648 2.430 - 2.441: 98.1547% ( 11) 00:16:48.648 2.441 - 2.453: 98.2144% ( 8) 00:16:48.648 2.453 - 2.465: 98.3265% ( 15) 00:16:48.648 2.465 - 2.477: 98.4087% ( 11) 00:16:48.648 2.477 - 2.489: 98.4684% ( 8) 00:16:48.648 2.489 - 2.501: 98.5357% ( 9) 00:16:48.648 2.501 - 2.513: 98.5656% ( 4) 00:16:48.648 2.513 - 2.524: 98.5805% ( 2) 00:16:48.648 2.524 - 2.536: 98.6029% ( 3) 00:16:48.648 2.536 - 2.548: 98.6179% ( 2) 00:16:48.648 2.607 - 2.619: 98.6253% ( 1) 00:16:48.648 2.631 - 2.643: 98.6328% ( 1) 00:16:48.648 2.643 - 2.655: 98.6403% ( 1) 00:16:48.648 2.667 - 2.679: 98.6552% ( 2) 00:16:48.648 3.200 - 3.224: 98.6627% ( 1) 00:16:48.648 3.247 - 3.271: 98.6776% ( 2) 00:16:48.648 3.271 - 3.295: 98.6926% ( 2) 00:16:48.648 3.295 - 3.319: 98.7000% ( 1) 00:16:48.648 3.319 - 3.342: 98.7225% ( 3) 00:16:48.648 3.342 - 3.366: 98.7374% ( 2) 00:16:48.648 3.390 - 3.413: 98.7449% ( 1) 00:16:48.648 3.413 - 3.437: 98.7523% ( 1) 00:16:48.648 3.484 - 3.508: 98.7598% ( 1) 00:16:48.648 3.532 - 3.556: 98.7673% ( 1) 00:16:48.648 3.603 - 3.627: 98.7747% ( 1) 00:16:48.648 3.627 - 3.650: 98.7897% ( 2) 00:16:48.648 3.650 - 3.674: 98.7972% ( 1) 00:16:48.648 3.674 - 3.698: 98.8046% ( 1) 00:16:48.648 3.698 - 3.721: 98.8196% ( 2) 00:16:48.648 3.721 - 3.745: 98.8270% ( 1) 00:16:48.648 3.769 - 3.793: 98.8345% ( 1) 00:16:48.648 3.793 - 3.816: 98.8420% ( 1) 00:16:48.648 3.840 - 3.864: 98.8495% ( 1) 00:16:48.648 3.864 - 3.887: 98.8569% ( 1) 00:16:48.648 4.267 - 4.290: 98.8644% ( 1) 00:16:48.648 5.144 - 5.167: 98.8719% ( 1) 00:16:48.648 5.215 - 5.239: 98.8793% ( 1) 00:16:48.648 5.239 - 5.262: 98.8868% ( 1) 00:16:48.648 5.357 - 5.381: 98.8943% ( 1) 
00:16:48.648 5.665 - 5.689: 98.9018% ( 1) 00:16:48.648 5.713 - 5.736: 98.9092% ( 1) 00:16:48.648 5.760 - 5.784: 98.9167% ( 1) 00:16:48.648 5.807 - 5.831: 98.9242% ( 1) 00:16:48.648 5.855 - 5.879: 98.9316% ( 1) 00:16:48.648 5.950 - 5.973: 98.9391% ( 1) 00:16:48.648 6.044 - 6.068: 98.9466% ( 1) 00:16:48.648 6.258 - 6.305: 98.9541% ( 1) 00:16:48.648 6.305 - 6.353: 98.9615% ( 1) 00:16:48.648 6.353 - 6.400: 98.9690% ( 1) 00:16:48.648 6.542 - 6.590: 98.9765% ( 1) 00:16:48.648 6.637 - 6.684: 98.9839% ( 1) 00:16:48.648 6.874 - 6.921: 98.9914% ( 1) 00:16:48.648 [2024-07-25 23:22:46.220579] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:48.648 7.253 - 7.301: 98.9989% ( 1) 00:16:48.648 8.249 - 8.296: 99.0064% ( 1) 00:16:48.648 8.581 - 8.628: 99.0138% ( 1) 00:16:48.648 10.809 - 10.856: 99.0213% ( 1) 00:16:48.648 15.644 - 15.739: 99.0362% ( 2) 00:16:48.648 15.739 - 15.834: 99.0437% ( 1) 00:16:48.648 15.929 - 16.024: 99.0661% ( 3) 00:16:48.648 16.024 - 16.119: 99.0811% ( 2) 00:16:48.648 16.119 - 16.213: 99.0885% ( 1) 00:16:48.648 16.213 - 16.308: 99.1109% ( 3) 00:16:48.648 16.403 - 16.498: 99.1408% ( 4) 00:16:48.648 16.498 - 16.593: 99.1857% ( 6) 00:16:48.648 16.593 - 16.687: 99.2380% ( 7) 00:16:48.648 16.687 - 16.782: 99.2678% ( 4) 00:16:48.648 16.782 - 16.877: 99.2753% ( 1) 00:16:48.648 16.972 - 17.067: 99.2977% ( 3) 00:16:48.648 17.067 - 17.161: 99.3127% ( 2) 00:16:48.648 17.161 - 17.256: 99.3201% ( 1) 00:16:48.648 17.256 - 17.351: 99.3276% ( 1) 00:16:48.648 17.446 - 17.541: 99.3351% ( 1) 00:16:48.648 17.730 - 17.825: 99.3425% ( 1) 00:16:48.648 17.825 - 17.920: 99.3500% ( 1) 00:16:48.648 17.920 - 18.015: 99.3575% ( 1) 00:16:48.648 18.110 - 18.204: 99.3650% ( 1) 00:16:48.648 18.299 - 18.394: 99.3799% ( 2) 00:16:48.648 18.963 - 19.058: 99.3874% ( 1) 00:16:48.648 20.196 - 20.290: 99.3948% ( 1) 00:16:48.648 3980.705 - 4004.978: 99.8879% ( 66) 00:16:48.648 4004.978 - 4029.250: 99.9925% ( 14) 00:16:48.648 5000.154 - 5024.427: 100.0000% ( 1) 00:16:48.648 00:16:48.648 23:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:48.648 23:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:48.648 23:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:48.648 23:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:48.648 23:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:48.907 [ 00:16:48.907 { 00:16:48.907 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:48.907 "subtype": "Discovery", 00:16:48.907 "listen_addresses": [], 00:16:48.907 "allow_any_host": true, 00:16:48.907 "hosts": [] 00:16:48.907 }, 00:16:48.907 { 00:16:48.907 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:48.907 "subtype": "NVMe", 00:16:48.907 "listen_addresses": [ 00:16:48.907 { 00:16:48.907 "trtype": "VFIOUSER", 00:16:48.907 "adrfam": "IPv4", 00:16:48.907 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:48.907 "trsvcid": "0" 00:16:48.907 } 00:16:48.907 ], 00:16:48.907 "allow_any_host": true, 00:16:48.907 "hosts": [], 00:16:48.907 "serial_number": "SPDK1", 00:16:48.907 "model_number": "SPDK bdev Controller",
00:16:48.907 "max_namespaces": 32, 00:16:48.907 "min_cntlid": 1, 00:16:48.907 "max_cntlid": 65519, 00:16:48.907 "namespaces": [ 00:16:48.907 { 00:16:48.907 "nsid": 1, 00:16:48.907 "bdev_name": "Malloc1", 00:16:48.907 "name": "Malloc1", 00:16:48.907 "nguid": "A1FD4CA47588441B8AAE0E0466DCB675", 00:16:48.907 "uuid": "a1fd4ca4-7588-441b-8aae-0e0466dcb675" 00:16:48.907 } 00:16:48.907 ] 00:16:48.907 }, 00:16:48.907 { 00:16:48.907 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:48.907 "subtype": "NVMe", 00:16:48.907 "listen_addresses": [ 00:16:48.907 { 00:16:48.907 "trtype": "VFIOUSER", 00:16:48.907 "adrfam": "IPv4", 00:16:48.907 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:48.907 "trsvcid": "0" 00:16:48.907 } 00:16:48.907 ], 00:16:48.907 "allow_any_host": true, 00:16:48.907 "hosts": [], 00:16:48.907 "serial_number": "SPDK2", 00:16:48.907 "model_number": "SPDK bdev Controller", 00:16:48.907 "max_namespaces": 32, 00:16:48.907 "min_cntlid": 1, 00:16:48.907 "max_cntlid": 65519, 00:16:48.907 "namespaces": [ 00:16:48.907 { 00:16:48.907 "nsid": 1, 00:16:48.907 "bdev_name": "Malloc2", 00:16:48.907 "name": "Malloc2", 00:16:48.907 "nguid": "51F208BB91704DC0B79A608A65F1482F", 00:16:48.907 "uuid": "51f208bb-9170-4dc0-b79a-608a65f1482f" 00:16:48.907 } 00:16:48.907 ] 00:16:48.907 } 00:16:48.907 ] 00:16:48.907 23:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:48.907 23:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1377781 00:16:48.907 23:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:48.907 23:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:48.907 23:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:48.907 23:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:48.907 23:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:48.907 23:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:48.907 23:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:48.907 23:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:48.907 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.166 [2024-07-25 23:22:46.677670] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:49.166 Malloc3 00:16:49.166 23:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:49.424 [2024-07-25 23:22:47.026331] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:49.424 23:22:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:49.424 Asynchronous Event Request test 00:16:49.424 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:49.424 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:49.424 Registering asynchronous event callbacks... 00:16:49.424 Starting namespace attribute notice tests for all controllers... 00:16:49.424 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:49.424 aer_cb - Changed Namespace 00:16:49.424 Cleaning up... 00:16:49.682 [ 00:16:49.682 { 00:16:49.682 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:49.682 "subtype": "Discovery", 00:16:49.682 "listen_addresses": [], 00:16:49.682 "allow_any_host": true, 00:16:49.682 "hosts": [] 00:16:49.682 }, 00:16:49.682 { 00:16:49.682 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:49.682 "subtype": "NVMe", 00:16:49.682 "listen_addresses": [ 00:16:49.682 { 00:16:49.682 "trtype": "VFIOUSER", 00:16:49.682 "adrfam": "IPv4", 00:16:49.682 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:49.682 "trsvcid": "0" 00:16:49.682 } 00:16:49.682 ], 00:16:49.682 "allow_any_host": true, 00:16:49.682 "hosts": [], 00:16:49.682 "serial_number": "SPDK1", 00:16:49.682 "model_number": "SPDK bdev Controller", 00:16:49.682 "max_namespaces": 32, 00:16:49.682 "min_cntlid": 1, 00:16:49.682 "max_cntlid": 65519, 00:16:49.682 "namespaces": [ 00:16:49.682 { 00:16:49.682 "nsid": 1, 00:16:49.682 "bdev_name": "Malloc1", 00:16:49.682 "name": "Malloc1", 00:16:49.682 "nguid": "A1FD4CA47588441B8AAE0E0466DCB675", 00:16:49.682 "uuid": "a1fd4ca4-7588-441b-8aae-0e0466dcb675" 00:16:49.682 }, 00:16:49.682 { 00:16:49.682 "nsid": 2, 00:16:49.682 "bdev_name": "Malloc3", 00:16:49.682 "name": "Malloc3", 00:16:49.682 "nguid": "9D774A2A2DD04163ACDFD4475E61B420", 00:16:49.683 "uuid": "9d774a2a-2dd0-4163-acdf-d4475e61b420" 00:16:49.683 } 00:16:49.683 ] 00:16:49.683 }, 00:16:49.683 { 00:16:49.683 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:49.683 "subtype": "NVMe", 00:16:49.683 "listen_addresses": [ 00:16:49.683 { 00:16:49.683 "trtype": "VFIOUSER", 00:16:49.683 "adrfam": "IPv4", 00:16:49.683 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:49.683 "trsvcid": "0" 00:16:49.683 } 00:16:49.683 ], 00:16:49.683 "allow_any_host": true, 00:16:49.683 "hosts": [], 00:16:49.683 
"serial_number": "SPDK2", 00:16:49.683 "model_number": "SPDK bdev Controller", 00:16:49.683 "max_namespaces": 32, 00:16:49.683 "min_cntlid": 1, 00:16:49.683 "max_cntlid": 65519, 00:16:49.683 "namespaces": [ 00:16:49.683 { 00:16:49.683 "nsid": 1, 00:16:49.683 "bdev_name": "Malloc2", 00:16:49.683 "name": "Malloc2", 00:16:49.683 "nguid": "51F208BB91704DC0B79A608A65F1482F", 00:16:49.683 "uuid": "51f208bb-9170-4dc0-b79a-608a65f1482f" 00:16:49.683 } 00:16:49.683 ] 00:16:49.683 } 00:16:49.683 ] 00:16:49.683 23:22:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1377781 00:16:49.683 23:22:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:49.683 23:22:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:49.683 23:22:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:49.683 23:22:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:49.683 [2024-07-25 23:22:47.324773] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:16:49.683 [2024-07-25 23:22:47.324815] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1377910 ] 00:16:49.683 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.683 [2024-07-25 23:22:47.342646] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:49.683 [2024-07-25 23:22:47.360199] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:49.683 [2024-07-25 23:22:47.369276] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:49.683 [2024-07-25 23:22:47.369316] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe7520fd000 00:16:49.683 [2024-07-25 23:22:47.370272] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:49.683 [2024-07-25 23:22:47.371277] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:49.683 [2024-07-25 23:22:47.372282] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:49.683 [2024-07-25 23:22:47.373284] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:49.683 [2024-07-25 23:22:47.374293] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:49.683 [2024-07-25 23:22:47.375304] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:49.683 [2024-07-25 23:22:47.376314] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:49.683 [2024-07-25 23:22:47.377319] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:49.683 [2024-07-25 23:22:47.378325] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:49.683 [2024-07-25 23:22:47.378347] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe750ebf000 00:16:49.683 [2024-07-25 23:22:47.379472] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:49.683 [2024-07-25 23:22:47.398224] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:49.683 [2024-07-25 23:22:47.398259] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:16:49.683 [2024-07-25 23:22:47.400351] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:49.683 [2024-07-25 23:22:47.400420] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:49.683 [2024-07-25 23:22:47.400524] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:16:49.683 [2024-07-25 23:22:47.400547] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:16:49.683 [2024-07-25 23:22:47.400558] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 
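In the records below, the read of offset 0x8 returning 0x10300 is the NVMe VS (Version) register, whose fields are major (bits 31:16), minor (15:8), and tertiary (7:0) per the NVMe specification; decoding it recovers the "1.3" reported in the identify dumps earlier in this log:

  # decode the VS value 0x10300 read from register offset 0x8
  printf 'NVMe %d.%d.%d\n' $(( 0x10300 >> 16 )) $(( (0x10300 >> 8) & 0xff )) $(( 0x10300 & 0xff ))
  # -> NVMe 1.3.0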
00:16:49.683 [2024-07-25 23:22:47.401355] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:49.683 [2024-07-25 23:22:47.401394] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:16:49.683 [2024-07-25 23:22:47.401409] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:16:49.683 [2024-07-25 23:22:47.402363] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:49.683 [2024-07-25 23:22:47.402397] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:16:49.683 [2024-07-25 23:22:47.402411] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:16:49.683 [2024-07-25 23:22:47.403369] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:49.683 [2024-07-25 23:22:47.403391] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:49.683 [2024-07-25 23:22:47.404375] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:49.683 [2024-07-25 23:22:47.404411] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:16:49.683 [2024-07-25 23:22:47.404420] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:16:49.683 [2024-07-25 23:22:47.404432] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:49.683 [2024-07-25 23:22:47.404542] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:16:49.683 [2024-07-25 23:22:47.404551] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:49.683 [2024-07-25 23:22:47.404575] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:49.683 [2024-07-25 23:22:47.405387] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:49.683 [2024-07-25 23:22:47.406394] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:49.942 [2024-07-25 23:22:47.407405] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:49.943 [2024-07-25 23:22:47.408408] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:49.943 [2024-07-25 23:22:47.408475] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for 
CSTS.RDY = 1 (timeout 15000 ms) 00:16:49.943 [2024-07-25 23:22:47.409423] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:49.943 [2024-07-25 23:22:47.409442] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:49.943 [2024-07-25 23:22:47.409452] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:16:49.943 [2024-07-25 23:22:47.409476] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:16:49.943 [2024-07-25 23:22:47.409489] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:16:49.943 [2024-07-25 23:22:47.409512] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:49.943 [2024-07-25 23:22:47.409522] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:49.943 [2024-07-25 23:22:47.409529] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:49.943 [2024-07-25 23:22:47.409548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:49.943 [2024-07-25 23:22:47.416075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:49.943 [2024-07-25 23:22:47.416099] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:16:49.943 [2024-07-25 23:22:47.416112] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:16:49.943 [2024-07-25 23:22:47.416121] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:16:49.943 [2024-07-25 23:22:47.416129] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:49.943 [2024-07-25 23:22:47.416137] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:16:49.943 [2024-07-25 23:22:47.416145] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:16:49.943 [2024-07-25 23:22:47.416154] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:16:49.943 [2024-07-25 23:22:47.416168] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:16:49.943 [2024-07-25 23:22:47.416188] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:49.943 [2024-07-25 23:22:47.424071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:49.943 [2024-07-25 23:22:47.424099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:49.943 [2024-07-25 23:22:47.424114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:49.943 [2024-07-25 23:22:47.424127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:49.943 [2024-07-25 23:22:47.424139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:49.943 [2024-07-25 23:22:47.424148] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:16:49.943 [2024-07-25 23:22:47.424164] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:49.943 [2024-07-25 23:22:47.424179] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:49.943 [2024-07-25 23:22:47.432085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:49.943 [2024-07-25 23:22:47.432103] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:16:49.943 [2024-07-25 23:22:47.432113] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:49.943 [2024-07-25 23:22:47.432129] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:16:49.943 [2024-07-25 23:22:47.432141] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:16:49.943 [2024-07-25 23:22:47.432155] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:49.943 [2024-07-25 23:22:47.440083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:49.943 [2024-07-25 23:22:47.440159] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:16:49.943 [2024-07-25 23:22:47.440177] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:16:49.943 [2024-07-25 23:22:47.440195] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:49.943 [2024-07-25 23:22:47.440204] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:49.943 [2024-07-25 23:22:47.440211] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:49.943 [2024-07-25 23:22:47.440221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:49.943 [2024-07-25 23:22:47.448083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:49.943 [2024-07-25 23:22:47.448116] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:16:49.943 [2024-07-25 23:22:47.448133] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:16:49.943 [2024-07-25 23:22:47.448149] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:16:49.943 [2024-07-25 23:22:47.448162] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:49.943 [2024-07-25 23:22:47.448170] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:49.943 [2024-07-25 23:22:47.448177] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:49.943 [2024-07-25 23:22:47.448187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:49.943 [2024-07-25 23:22:47.456070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:49.943 [2024-07-25 23:22:47.456125] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:49.943 [2024-07-25 23:22:47.456143] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:49.943 [2024-07-25 23:22:47.456157] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:49.943 [2024-07-25 23:22:47.456166] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:49.943 [2024-07-25 23:22:47.456172] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:49.943 [2024-07-25 23:22:47.456183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:49.943 [2024-07-25 23:22:47.464069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:49.943 [2024-07-25 23:22:47.464091] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:49.943 [2024-07-25 23:22:47.464119] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:16:49.943 [2024-07-25 23:22:47.464135] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:16:49.943 [2024-07-25 23:22:47.464151] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:16:49.943 [2024-07-25 23:22:47.464161] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:49.943 [2024-07-25 23:22:47.464170] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:16:49.943 [2024-07-25 23:22:47.464184] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:16:49.944 [2024-07-25 23:22:47.464193] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:16:49.944 [2024-07-25 23:22:47.464202] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:16:49.944 [2024-07-25 23:22:47.464229] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:49.944 [2024-07-25 23:22:47.472072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:49.944 [2024-07-25 23:22:47.472119] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:49.944 [2024-07-25 23:22:47.480068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:49.944 [2024-07-25 23:22:47.480093] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:49.944 [2024-07-25 23:22:47.488074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:49.944 [2024-07-25 23:22:47.488099] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:49.944 [2024-07-25 23:22:47.496069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:49.944 [2024-07-25 23:22:47.496119] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:49.944 [2024-07-25 23:22:47.496131] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:49.944 [2024-07-25 23:22:47.496137] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:49.944 [2024-07-25 23:22:47.496144] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:49.944 [2024-07-25 23:22:47.496150] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:49.944 [2024-07-25 23:22:47.496160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:49.944 [2024-07-25 23:22:47.496173] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:49.944 [2024-07-25 23:22:47.496181] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:49.944 [2024-07-25 23:22:47.496188] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:49.944 [2024-07-25 23:22:47.496196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:49.944 [2024-07-25 23:22:47.496208] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:49.944 [2024-07-25 23:22:47.496216] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:49.944 [2024-07-25 23:22:47.496222] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:49.944 [2024-07-25 23:22:47.496231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:49.944 [2024-07-25 23:22:47.496244] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:49.944 [2024-07-25 23:22:47.496252] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:49.944 [2024-07-25 23:22:47.496258] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:49.944 [2024-07-25 23:22:47.496267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:49.944 [2024-07-25 23:22:47.504072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:49.944 [2024-07-25 23:22:47.504109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:49.944 [2024-07-25 23:22:47.504127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:49.944 [2024-07-25 23:22:47.504140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:49.944 ===================================================== 00:16:49.944 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:49.944 ===================================================== 00:16:49.944 Controller Capabilities/Features 00:16:49.944 ================================ 00:16:49.944 Vendor ID: 4e58 00:16:49.944 Subsystem Vendor ID: 4e58 00:16:49.944 Serial Number: SPDK2 00:16:49.944 Model Number: SPDK bdev Controller 00:16:49.944 Firmware Version: 24.09 00:16:49.944 Recommended Arb Burst: 6 00:16:49.944 IEEE OUI Identifier: 8d 6b 50 00:16:49.944 Multi-path I/O 00:16:49.944 May have multiple subsystem ports: Yes 00:16:49.944 May have multiple controllers: Yes 00:16:49.944 Associated with SR-IOV VF: No 00:16:49.944 Max Data Transfer Size: 131072 00:16:49.944 Max Number of Namespaces: 32 00:16:49.944 Max Number of I/O Queues: 127 00:16:49.944 NVMe Specification Version (VS): 1.3 00:16:49.944 NVMe Specification Version (Identify): 1.3 00:16:49.944 Maximum Queue Entries: 256 00:16:49.944 Contiguous Queues Required: Yes 00:16:49.944 Arbitration Mechanisms Supported 00:16:49.944 Weighted Round Robin: Not Supported 00:16:49.944 Vendor Specific: Not Supported 00:16:49.944 Reset Timeout: 15000 ms 00:16:49.944 Doorbell Stride: 4 bytes 00:16:49.944 NVM Subsystem Reset: Not Supported 00:16:49.944 Command Sets Supported 00:16:49.944 NVM Command Set: Supported 00:16:49.944 Boot Partition: Not Supported 00:16:49.944 Memory Page Size Minimum: 4096 bytes 00:16:49.944 Memory Page Size Maximum: 4096 bytes 00:16:49.944 Persistent Memory Region: Not Supported 00:16:49.944 Optional Asynchronous Events Supported 00:16:49.944 Namespace Attribute Notices: 
Supported 00:16:49.944 Firmware Activation Notices: Not Supported 00:16:49.944 ANA Change Notices: Not Supported 00:16:49.944 PLE Aggregate Log Change Notices: Not Supported 00:16:49.944 LBA Status Info Alert Notices: Not Supported 00:16:49.944 EGE Aggregate Log Change Notices: Not Supported 00:16:49.944 Normal NVM Subsystem Shutdown event: Not Supported 00:16:49.944 Zone Descriptor Change Notices: Not Supported 00:16:49.944 Discovery Log Change Notices: Not Supported 00:16:49.944 Controller Attributes 00:16:49.944 128-bit Host Identifier: Supported 00:16:49.944 Non-Operational Permissive Mode: Not Supported 00:16:49.944 NVM Sets: Not Supported 00:16:49.944 Read Recovery Levels: Not Supported 00:16:49.944 Endurance Groups: Not Supported 00:16:49.944 Predictable Latency Mode: Not Supported 00:16:49.944 Traffic Based Keep ALive: Not Supported 00:16:49.944 Namespace Granularity: Not Supported 00:16:49.944 SQ Associations: Not Supported 00:16:49.944 UUID List: Not Supported 00:16:49.944 Multi-Domain Subsystem: Not Supported 00:16:49.944 Fixed Capacity Management: Not Supported 00:16:49.944 Variable Capacity Management: Not Supported 00:16:49.944 Delete Endurance Group: Not Supported 00:16:49.944 Delete NVM Set: Not Supported 00:16:49.944 Extended LBA Formats Supported: Not Supported 00:16:49.944 Flexible Data Placement Supported: Not Supported 00:16:49.944 00:16:49.944 Controller Memory Buffer Support 00:16:49.944 ================================ 00:16:49.944 Supported: No 00:16:49.944 00:16:49.944 Persistent Memory Region Support 00:16:49.944 ================================ 00:16:49.944 Supported: No 00:16:49.944 00:16:49.944 Admin Command Set Attributes 00:16:49.944 ============================ 00:16:49.944 Security Send/Receive: Not Supported 00:16:49.944 Format NVM: Not Supported 00:16:49.944 Firmware Activate/Download: Not Supported 00:16:49.944 Namespace Management: Not Supported 00:16:49.944 Device Self-Test: Not Supported 00:16:49.944 Directives: Not Supported 00:16:49.944 NVMe-MI: Not Supported 00:16:49.944 Virtualization Management: Not Supported 00:16:49.944 Doorbell Buffer Config: Not Supported 00:16:49.944 Get LBA Status Capability: Not Supported 00:16:49.944 Command & Feature Lockdown Capability: Not Supported 00:16:49.944 Abort Command Limit: 4 00:16:49.944 Async Event Request Limit: 4 00:16:49.944 Number of Firmware Slots: N/A 00:16:49.944 Firmware Slot 1 Read-Only: N/A 00:16:49.944 Firmware Activation Without Reset: N/A 00:16:49.945 Multiple Update Detection Support: N/A 00:16:49.945 Firmware Update Granularity: No Information Provided 00:16:49.945 Per-Namespace SMART Log: No 00:16:49.945 Asymmetric Namespace Access Log Page: Not Supported 00:16:49.945 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:49.945 Command Effects Log Page: Supported 00:16:49.945 Get Log Page Extended Data: Supported 00:16:49.945 Telemetry Log Pages: Not Supported 00:16:49.945 Persistent Event Log Pages: Not Supported 00:16:49.945 Supported Log Pages Log Page: May Support 00:16:49.945 Commands Supported & Effects Log Page: Not Supported 00:16:49.945 Feature Identifiers & Effects Log Page:May Support 00:16:49.945 NVMe-MI Commands & Effects Log Page: May Support 00:16:49.945 Data Area 4 for Telemetry Log: Not Supported 00:16:49.945 Error Log Page Entries Supported: 128 00:16:49.945 Keep Alive: Supported 00:16:49.945 Keep Alive Granularity: 10000 ms 00:16:49.945 00:16:49.945 NVM Command Set Attributes 00:16:49.945 ========================== 00:16:49.945 Submission Queue Entry Size 00:16:49.945 Max: 64 
00:16:49.945 Min: 64 00:16:49.945 Completion Queue Entry Size 00:16:49.945 Max: 16 00:16:49.945 Min: 16 00:16:49.945 Number of Namespaces: 32 00:16:49.945 Compare Command: Supported 00:16:49.945 Write Uncorrectable Command: Not Supported 00:16:49.945 Dataset Management Command: Supported 00:16:49.945 Write Zeroes Command: Supported 00:16:49.945 Set Features Save Field: Not Supported 00:16:49.945 Reservations: Not Supported 00:16:49.945 Timestamp: Not Supported 00:16:49.945 Copy: Supported 00:16:49.945 Volatile Write Cache: Present 00:16:49.945 Atomic Write Unit (Normal): 1 00:16:49.945 Atomic Write Unit (PFail): 1 00:16:49.945 Atomic Compare & Write Unit: 1 00:16:49.945 Fused Compare & Write: Supported 00:16:49.945 Scatter-Gather List 00:16:49.945 SGL Command Set: Supported (Dword aligned) 00:16:49.945 SGL Keyed: Not Supported 00:16:49.945 SGL Bit Bucket Descriptor: Not Supported 00:16:49.945 SGL Metadata Pointer: Not Supported 00:16:49.945 Oversized SGL: Not Supported 00:16:49.945 SGL Metadata Address: Not Supported 00:16:49.945 SGL Offset: Not Supported 00:16:49.945 Transport SGL Data Block: Not Supported 00:16:49.945 Replay Protected Memory Block: Not Supported 00:16:49.945 00:16:49.945 Firmware Slot Information 00:16:49.945 ========================= 00:16:49.945 Active slot: 1 00:16:49.945 Slot 1 Firmware Revision: 24.09 00:16:49.945 00:16:49.945 00:16:49.945 Commands Supported and Effects 00:16:49.945 ============================== 00:16:49.945 Admin Commands 00:16:49.945 -------------- 00:16:49.945 Get Log Page (02h): Supported 00:16:49.945 Identify (06h): Supported 00:16:49.945 Abort (08h): Supported 00:16:49.945 Set Features (09h): Supported 00:16:49.945 Get Features (0Ah): Supported 00:16:49.945 Asynchronous Event Request (0Ch): Supported 00:16:49.945 Keep Alive (18h): Supported 00:16:49.945 I/O Commands 00:16:49.945 ------------ 00:16:49.945 Flush (00h): Supported LBA-Change 00:16:49.945 Write (01h): Supported LBA-Change 00:16:49.945 Read (02h): Supported 00:16:49.945 Compare (05h): Supported 00:16:49.945 Write Zeroes (08h): Supported LBA-Change 00:16:49.945 Dataset Management (09h): Supported LBA-Change 00:16:49.945 Copy (19h): Supported LBA-Change 00:16:49.945 00:16:49.945 Error Log 00:16:49.945 ========= 00:16:49.945 00:16:49.945 Arbitration 00:16:49.945 =========== 00:16:49.945 Arbitration Burst: 1 00:16:49.945 00:16:49.945 Power Management 00:16:49.945 ================ 00:16:49.945 Number of Power States: 1 00:16:49.945 Current Power State: Power State #0 00:16:49.945 Power State #0: 00:16:49.945 Max Power: 0.00 W 00:16:49.945 Non-Operational State: Operational 00:16:49.945 Entry Latency: Not Reported 00:16:49.945 Exit Latency: Not Reported 00:16:49.945 Relative Read Throughput: 0 00:16:49.945 Relative Read Latency: 0 00:16:49.945 Relative Write Throughput: 0 00:16:49.945 Relative Write Latency: 0 00:16:49.945 Idle Power: Not Reported 00:16:49.945 Active Power: Not Reported 00:16:49.945 Non-Operational Permissive Mode: Not Supported 00:16:49.945 00:16:49.945 Health Information 00:16:49.945 ================== 00:16:49.945 Critical Warnings: 00:16:49.945 Available Spare Space: OK 00:16:49.945 Temperature: OK 00:16:49.945 Device Reliability: OK 00:16:49.945 Read Only: No 00:16:49.945 Volatile Memory Backup: OK 00:16:49.945 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:49.945 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:49.945 Available Spare: 0% 00:16:49.945 [2024-07-25 23:22:47.504254] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET
FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:49.945 [2024-07-25 23:22:47.512070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:49.945 [2024-07-25 23:22:47.512129] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:16:49.945 [2024-07-25 23:22:47.512147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:49.945 [2024-07-25 23:22:47.512158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:49.945 [2024-07-25 23:22:47.512168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:49.945 [2024-07-25 23:22:47.512178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:49.945 [2024-07-25 23:22:47.512262] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:49.945 [2024-07-25 23:22:47.512284] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:49.945 [2024-07-25 23:22:47.513265] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:49.945 [2024-07-25 23:22:47.513338] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:16:49.945 [2024-07-25 23:22:47.513353] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:16:49.945 [2024-07-25 23:22:47.514268] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:49.945 [2024-07-25 23:22:47.514291] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:16:49.945 [2024-07-25 23:22:47.514339] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:49.945 [2024-07-25 23:22:47.515479] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:49.945 Available Spare Threshold: 0% 00:16:49.945 Life Percentage Used: 0% 00:16:49.945 Data Units Read: 0 00:16:49.945 Data Units Written: 0 00:16:49.945 Host Read Commands: 0 00:16:49.945 Host Write Commands: 0 00:16:49.945 Controller Busy Time: 0 minutes 00:16:49.945 Power Cycles: 0 00:16:49.945 Power On Hours: 0 hours 00:16:49.945 Unsafe Shutdowns: 0 00:16:49.945 Unrecoverable Media Errors: 0 00:16:49.945 Lifetime Error Log Entries: 0 00:16:49.945 Warning Temperature Time: 0 minutes 00:16:49.945 Critical Temperature Time: 0 minutes 00:16:49.945 00:16:49.945 Number of Queues 00:16:49.945 ================ 00:16:49.945 Number of I/O Submission Queues: 127 00:16:49.945 Number of I/O Completion Queues: 127 00:16:49.945 00:16:49.945 Active Namespaces 00:16:49.945 ================= 00:16:49.945 Namespace ID:1 00:16:49.945 Error Recovery Timeout: Unlimited 00:16:49.945 Command Set Identifier: NVM (00h) 00:16:49.945 Deallocate: Supported 00:16:49.945 Deallocated/Unwritten Error: Not
Supported 00:16:49.945 Deallocated Read Value: Unknown 00:16:49.945 Deallocate in Write Zeroes: Not Supported 00:16:49.945 Deallocated Guard Field: 0xFFFF 00:16:49.946 Flush: Supported 00:16:49.946 Reservation: Supported 00:16:49.946 Namespace Sharing Capabilities: Multiple Controllers 00:16:49.946 Size (in LBAs): 131072 (0GiB) 00:16:49.946 Capacity (in LBAs): 131072 (0GiB) 00:16:49.946 Utilization (in LBAs): 131072 (0GiB) 00:16:49.946 NGUID: 51F208BB91704DC0B79A608A65F1482F 00:16:49.946 UUID: 51f208bb-9170-4dc0-b79a-608a65f1482f 00:16:49.946 Thin Provisioning: Not Supported 00:16:49.946 Per-NS Atomic Units: Yes 00:16:49.946 Atomic Boundary Size (Normal): 0 00:16:49.946 Atomic Boundary Size (PFail): 0 00:16:49.946 Atomic Boundary Offset: 0 00:16:49.946 Maximum Single Source Range Length: 65535 00:16:49.946 Maximum Copy Length: 65535 00:16:49.946 Maximum Source Range Count: 1 00:16:49.946 NGUID/EUI64 Never Reused: No 00:16:49.946 Namespace Write Protected: No 00:16:49.946 Number of LBA Formats: 1 00:16:49.946 Current LBA Format: LBA Format #00 00:16:49.946 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:49.946 00:16:49.946 23:22:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:49.946 EAL: No free 2048 kB hugepages reported on node 1 00:16:50.204 [2024-07-25 23:22:47.742843] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:55.477 Initializing NVMe Controllers 00:16:55.477 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:55.477 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:55.477 Initialization complete. Launching workers. 00:16:55.477 ======================================================== 00:16:55.477 Latency(us) 00:16:55.477 Device Information : IOPS MiB/s Average min max 00:16:55.477 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34498.17 134.76 3709.76 1179.51 11659.76 00:16:55.477 ======================================================== 00:16:55.477 Total : 34498.17 134.76 3709.76 1179.51 11659.76 00:16:55.477 00:16:55.477 [2024-07-25 23:22:52.859422] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:55.477 23:22:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:55.477 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.477 [2024-07-25 23:22:53.091266] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:00.779 Initializing NVMe Controllers 00:17:00.779 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:00.779 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:00.779 Initialization complete. Launching workers. 
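The two spdk_nvme_perf invocations here (nvmf_vfio_user.sh@84 with -w read above, nvmf_vfio_user.sh@85 with -w write just launched) share the same VFIOUSER transport string and differ only in the workload flag. A minimal sketch of the invocation pattern, reusing only the binary path, traddr, and subnqn values that appear in this log (the loop itself is illustrative, not part of the test script):

  for workload in read write; do
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
      -s 256 -g -q 128 -o 4096 -w "$workload" -t 5 -c 0x2
  done

The read-run numbers above are self-consistent: 34498.17 IOPS at the 4096-byte I/O size from -o comes to about 134.76 MiB/s, matching the MiB/s column, and with -q 128 I/Os kept outstanding Little's law (average latency = queue depth / IOPS) gives 128 / 34498.17 s, roughly 3710 us, in line with the reported 3709.76 us average.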
00:17:00.779 ======================================================== 00:17:00.779 Latency(us) 00:17:00.779 Device Information : IOPS MiB/s Average min max 00:17:00.779 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32401.85 126.57 3949.74 1197.52 9270.62 00:17:00.779 ======================================================== 00:17:00.779 Total : 32401.85 126.57 3949.74 1197.52 9270.62 00:17:00.779 00:17:00.779 [2024-07-25 23:22:58.114376] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:00.779 23:22:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:00.779 EAL: No free 2048 kB hugepages reported on node 1 00:17:00.779 [2024-07-25 23:22:58.323192] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:06.053 [2024-07-25 23:23:03.460208] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:06.053 Initializing NVMe Controllers 00:17:06.053 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:06.053 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:06.053 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:06.053 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:06.053 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:06.053 Initialization complete. Launching workers. 00:17:06.053 Starting thread on core 2 00:17:06.053 Starting thread on core 3 00:17:06.053 Starting thread on core 1 00:17:06.053 23:23:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:06.053 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.053 [2024-07-25 23:23:03.759768] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:09.349 [2024-07-25 23:23:06.851329] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:09.349 Initializing NVMe Controllers 00:17:09.349 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:09.349 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:09.349 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:09.349 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:09.349 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:09.349 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:09.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:09.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:09.349 Initialization complete. Launching workers. 
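In the per-core arbitration results that follow, each controller line reports throughput both as IO/s and as secs/100000 ios; because the run was configured with -n 100000 (see the echoed configuration above), the two figures are reciprocals, e.g. 100000 / 6258.00 IO/s is approximately 15.98 s.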
00:17:09.349 Starting thread on core 1 with urgent priority queue 00:17:09.349 Starting thread on core 2 with urgent priority queue 00:17:09.349 Starting thread on core 3 with urgent priority queue 00:17:09.349 Starting thread on core 0 with urgent priority queue 00:17:09.349 SPDK bdev Controller (SPDK2 ) core 0: 6258.00 IO/s 15.98 secs/100000 ios 00:17:09.349 SPDK bdev Controller (SPDK2 ) core 1: 5565.33 IO/s 17.97 secs/100000 ios 00:17:09.349 SPDK bdev Controller (SPDK2 ) core 2: 5804.67 IO/s 17.23 secs/100000 ios 00:17:09.349 SPDK bdev Controller (SPDK2 ) core 3: 5873.67 IO/s 17.03 secs/100000 ios 00:17:09.349 ======================================================== 00:17:09.349 00:17:09.349 23:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:09.349 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.608 [2024-07-25 23:23:07.151551] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:09.608 Initializing NVMe Controllers 00:17:09.608 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:09.608 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:09.608 Namespace ID: 1 size: 0GB 00:17:09.608 Initialization complete. 00:17:09.608 INFO: using host memory buffer for IO 00:17:09.608 Hello world! 00:17:09.608 [2024-07-25 23:23:07.160619] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:09.608 23:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:09.608 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.867 [2024-07-25 23:23:07.437814] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:10.807 Initializing NVMe Controllers 00:17:10.807 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:10.807 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:10.807 Initialization complete. Launching workers. 
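A note on units in the overhead output that follows: the summary line gives submit and complete times in nanoseconds ('in ns'), while the histograms beneath it bucket the same samples in microseconds ('Range in us'). The submit maximum of 4015870.0 ns, for example, is about 4015.87 us and falls in the topmost 4004.978 - 4029.250 bucket.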
00:17:10.807 submit (in ns) avg, min, max = 8069.2, 3561.1, 4015870.0 00:17:10.807 complete (in ns) avg, min, max = 26265.8, 2062.2, 4015208.9 00:17:10.807 00:17:10.807 Submit histogram 00:17:10.807 ================ 00:17:10.807 Range in us Cumulative Count 00:17:10.807 3.556 - 3.579: 0.3303% ( 45) 00:17:10.807 3.579 - 3.603: 1.1083% ( 106) 00:17:10.807 3.603 - 3.627: 3.4278% ( 316) 00:17:10.807 3.627 - 3.650: 8.3456% ( 670) 00:17:10.807 3.650 - 3.674: 16.6765% ( 1135) 00:17:10.807 3.674 - 3.698: 26.7543% ( 1373) 00:17:10.807 3.698 - 3.721: 37.2651% ( 1432) 00:17:10.807 3.721 - 3.745: 46.0291% ( 1194) 00:17:10.807 3.745 - 3.769: 52.7745% ( 919) 00:17:10.807 3.769 - 3.793: 58.8520% ( 828) 00:17:10.807 3.793 - 3.816: 63.1973% ( 592) 00:17:10.807 3.816 - 3.840: 67.9169% ( 643) 00:17:10.807 3.840 - 3.864: 71.4401% ( 480) 00:17:10.807 3.864 - 3.887: 74.8752% ( 468) 00:17:10.807 3.887 - 3.911: 77.9360% ( 417) 00:17:10.807 3.911 - 3.935: 81.3344% ( 463) 00:17:10.807 3.935 - 3.959: 84.3878% ( 416) 00:17:10.807 3.959 - 3.982: 87.1403% ( 375) 00:17:10.807 3.982 - 4.006: 89.1588% ( 275) 00:17:10.807 4.006 - 4.030: 90.6782% ( 207) 00:17:10.807 4.030 - 4.053: 92.1976% ( 207) 00:17:10.807 4.053 - 4.077: 93.5041% ( 178) 00:17:10.807 4.077 - 4.101: 94.6858% ( 161) 00:17:10.807 4.101 - 4.124: 95.5153% ( 113) 00:17:10.807 4.124 - 4.148: 96.1538% ( 87) 00:17:10.807 4.148 - 4.172: 96.5355% ( 52) 00:17:10.807 4.172 - 4.196: 96.7631% ( 31) 00:17:10.807 4.196 - 4.219: 96.8658% ( 14) 00:17:10.807 4.219 - 4.243: 96.9906% ( 17) 00:17:10.807 4.243 - 4.267: 97.0713% ( 11) 00:17:10.808 4.267 - 4.290: 97.1594% ( 12) 00:17:10.808 4.290 - 4.314: 97.2255% ( 9) 00:17:10.808 4.314 - 4.338: 97.2842% ( 8) 00:17:10.808 4.338 - 4.361: 97.3282% ( 6) 00:17:10.808 4.361 - 4.385: 97.3503% ( 3) 00:17:10.808 4.385 - 4.409: 97.4237% ( 10) 00:17:10.808 4.409 - 4.433: 97.4604% ( 5) 00:17:10.808 4.433 - 4.456: 97.4750% ( 2) 00:17:10.808 4.456 - 4.480: 97.4824% ( 1) 00:17:10.808 4.480 - 4.504: 97.4971% ( 2) 00:17:10.808 4.527 - 4.551: 97.5117% ( 2) 00:17:10.808 4.551 - 4.575: 97.5558% ( 6) 00:17:10.808 4.575 - 4.599: 97.5631% ( 1) 00:17:10.808 4.599 - 4.622: 97.5705% ( 1) 00:17:10.808 4.622 - 4.646: 97.5778% ( 1) 00:17:10.808 4.693 - 4.717: 97.5925% ( 2) 00:17:10.808 4.741 - 4.764: 97.6072% ( 2) 00:17:10.808 4.764 - 4.788: 97.6292% ( 3) 00:17:10.808 4.788 - 4.812: 97.6585% ( 4) 00:17:10.808 4.812 - 4.836: 97.7099% ( 7) 00:17:10.808 4.836 - 4.859: 97.7613% ( 7) 00:17:10.808 4.859 - 4.883: 97.8274% ( 9) 00:17:10.808 4.883 - 4.907: 97.8787% ( 7) 00:17:10.808 4.907 - 4.930: 97.9081% ( 4) 00:17:10.808 4.930 - 4.954: 97.9448% ( 5) 00:17:10.808 4.954 - 4.978: 98.0255% ( 11) 00:17:10.808 4.978 - 5.001: 98.0622% ( 5) 00:17:10.808 5.001 - 5.025: 98.0989% ( 5) 00:17:10.808 5.025 - 5.049: 98.1283% ( 4) 00:17:10.808 5.049 - 5.073: 98.1650% ( 5) 00:17:10.808 5.073 - 5.096: 98.1797% ( 2) 00:17:10.808 5.096 - 5.120: 98.1944% ( 2) 00:17:10.808 5.120 - 5.144: 98.2090% ( 2) 00:17:10.808 5.144 - 5.167: 98.2384% ( 4) 00:17:10.808 5.167 - 5.191: 98.2531% ( 2) 00:17:10.808 5.191 - 5.215: 98.2898% ( 5) 00:17:10.808 5.215 - 5.239: 98.3045% ( 2) 00:17:10.808 5.239 - 5.262: 98.3118% ( 1) 00:17:10.808 5.262 - 5.286: 98.3191% ( 1) 00:17:10.808 5.333 - 5.357: 98.3265% ( 1) 00:17:10.808 5.357 - 5.381: 98.3412% ( 2) 00:17:10.808 5.381 - 5.404: 98.3485% ( 1) 00:17:10.808 5.404 - 5.428: 98.3558% ( 1) 00:17:10.808 5.594 - 5.618: 98.3632% ( 1) 00:17:10.808 5.618 - 5.641: 98.3705% ( 1) 00:17:10.808 5.665 - 5.689: 98.3779% ( 1) 00:17:10.808 5.713 - 5.736: 98.3852% ( 1) 
00:17:10.808 5.736 - 5.760: 98.3999% ( 2) 00:17:10.808 5.760 - 5.784: 98.4146% ( 2) 00:17:10.808 5.879 - 5.902: 98.4219% ( 1) 00:17:10.808 6.163 - 6.210: 98.4366% ( 2) 00:17:10.808 6.353 - 6.400: 98.4439% ( 1) 00:17:10.808 6.400 - 6.447: 98.4513% ( 1) 00:17:10.808 6.590 - 6.637: 98.4586% ( 1) 00:17:10.808 6.732 - 6.779: 98.4659% ( 1) 00:17:10.808 6.827 - 6.874: 98.4733% ( 1) 00:17:10.808 6.921 - 6.969: 98.4880% ( 2) 00:17:10.808 6.969 - 7.016: 98.4953% ( 1) 00:17:10.808 7.016 - 7.064: 98.5173% ( 3) 00:17:10.808 7.064 - 7.111: 98.5247% ( 1) 00:17:10.808 7.159 - 7.206: 98.5393% ( 2) 00:17:10.808 7.253 - 7.301: 98.5540% ( 2) 00:17:10.808 7.301 - 7.348: 98.5614% ( 1) 00:17:10.808 7.348 - 7.396: 98.5834% ( 3) 00:17:10.808 7.443 - 7.490: 98.5907% ( 1) 00:17:10.808 7.490 - 7.538: 98.6127% ( 3) 00:17:10.808 7.680 - 7.727: 98.6201% ( 1) 00:17:10.808 7.727 - 7.775: 98.6274% ( 1) 00:17:10.808 7.822 - 7.870: 98.6421% ( 2) 00:17:10.808 7.917 - 7.964: 98.6494% ( 1) 00:17:10.808 7.964 - 8.012: 98.6568% ( 1) 00:17:10.808 8.059 - 8.107: 98.6641% ( 1) 00:17:10.808 8.107 - 8.154: 98.6715% ( 1) 00:17:10.808 8.201 - 8.249: 98.6788% ( 1) 00:17:10.808 8.249 - 8.296: 98.7008% ( 3) 00:17:10.808 8.344 - 8.391: 98.7082% ( 1) 00:17:10.808 8.439 - 8.486: 98.7228% ( 2) 00:17:10.808 8.533 - 8.581: 98.7375% ( 2) 00:17:10.808 8.581 - 8.628: 98.7522% ( 2) 00:17:10.808 8.676 - 8.723: 98.7595% ( 1) 00:17:10.808 8.723 - 8.770: 98.7669% ( 1) 00:17:10.808 8.865 - 8.913: 98.7742% ( 1) 00:17:10.808 9.007 - 9.055: 98.7816% ( 1) 00:17:10.808 9.055 - 9.102: 98.7889% ( 1) 00:17:10.808 9.150 - 9.197: 98.7962% ( 1) 00:17:10.808 9.339 - 9.387: 98.8036% ( 1) 00:17:10.808 9.529 - 9.576: 98.8109% ( 1) 00:17:10.808 9.671 - 9.719: 98.8183% ( 1) 00:17:10.808 9.766 - 9.813: 98.8256% ( 1) 00:17:10.808 9.861 - 9.908: 98.8329% ( 1) 00:17:10.808 10.430 - 10.477: 98.8403% ( 1) 00:17:10.808 10.619 - 10.667: 98.8476% ( 1) 00:17:10.808 10.856 - 10.904: 98.8623% ( 2) 00:17:10.808 11.093 - 11.141: 98.8696% ( 1) 00:17:10.808 11.188 - 11.236: 98.8770% ( 1) 00:17:10.808 11.425 - 11.473: 98.8917% ( 2) 00:17:10.808 11.473 - 11.520: 98.9063% ( 2) 00:17:10.808 11.520 - 11.567: 98.9210% ( 2) 00:17:10.808 11.567 - 11.615: 98.9284% ( 1) 00:17:10.808 11.852 - 11.899: 98.9357% ( 1) 00:17:10.808 11.947 - 11.994: 98.9430% ( 1) 00:17:10.808 12.089 - 12.136: 98.9504% ( 1) 00:17:10.808 12.136 - 12.231: 98.9577% ( 1) 00:17:10.808 12.231 - 12.326: 98.9651% ( 1) 00:17:10.808 12.326 - 12.421: 98.9797% ( 2) 00:17:10.808 12.421 - 12.516: 98.9871% ( 1) 00:17:10.808 12.516 - 12.610: 99.0091% ( 3) 00:17:10.808 12.610 - 12.705: 99.0164% ( 1) 00:17:10.808 12.895 - 12.990: 99.0238% ( 1) 00:17:10.808 12.990 - 13.084: 99.0385% ( 2) 00:17:10.808 13.179 - 13.274: 99.0458% ( 1) 00:17:10.808 13.559 - 13.653: 99.0531% ( 1) 00:17:10.808 13.653 - 13.748: 99.0605% ( 1) 00:17:10.808 13.748 - 13.843: 99.0678% ( 1) 00:17:10.808 13.843 - 13.938: 99.0752% ( 1) 00:17:10.808 13.938 - 14.033: 99.0825% ( 1) 00:17:10.808 14.127 - 14.222: 99.0898% ( 1) 00:17:10.808 14.222 - 14.317: 99.0972% ( 1) 00:17:10.808 14.886 - 14.981: 99.1045% ( 1) 00:17:10.808 17.351 - 17.446: 99.1339% ( 4) 00:17:10.808 17.446 - 17.541: 99.1486% ( 2) 00:17:10.808 17.541 - 17.636: 99.1706% ( 3) 00:17:10.808 17.636 - 17.730: 99.2440% ( 10) 00:17:10.808 17.730 - 17.825: 99.2954% ( 7) 00:17:10.808 17.825 - 17.920: 99.3394% ( 6) 00:17:10.808 17.920 - 18.015: 99.3908% ( 7) 00:17:10.808 18.015 - 18.110: 99.4055% ( 2) 00:17:10.808 18.110 - 18.204: 99.4789% ( 10) 00:17:10.808 18.204 - 18.299: 99.5376% ( 8) 00:17:10.808 18.299 - 18.394: 
99.5743% ( 5) 00:17:10.808 18.394 - 18.489: 99.6477% ( 10) 00:17:10.808 18.489 - 18.584: 99.6770% ( 4) 00:17:10.808 18.584 - 18.679: 99.6991% ( 3) 00:17:10.808 18.679 - 18.773: 99.7358% ( 5) 00:17:10.808 18.773 - 18.868: 99.7651% ( 4) 00:17:10.808 18.868 - 18.963: 99.7798% ( 2) 00:17:10.808 18.963 - 19.058: 99.7945% ( 2) 00:17:10.808 19.058 - 19.153: 99.8092% ( 2) 00:17:10.808 19.153 - 19.247: 99.8165% ( 1) 00:17:10.808 19.247 - 19.342: 99.8385% ( 3) 00:17:10.808 19.342 - 19.437: 99.8605% ( 3) 00:17:10.808 19.437 - 19.532: 99.8679% ( 1) 00:17:10.808 20.196 - 20.290: 99.8752% ( 1) 00:17:10.808 20.575 - 20.670: 99.8826% ( 1) 00:17:10.808 21.902 - 21.997: 99.8899% ( 1) 00:17:10.808 22.945 - 23.040: 99.8972% ( 1) 00:17:10.808 3980.705 - 4004.978: 99.9927% ( 13) 00:17:10.808 4004.978 - 4029.250: 100.0000% ( 1) 00:17:10.808 00:17:10.808 Complete histogram 00:17:10.808 ================== 00:17:10.808 Range in us Cumulative Count 00:17:10.808 2.062 - 2.074: 4.0003% ( 545) 00:17:10.808 2.074 - 2.086: 44.5757% ( 5528) 00:17:10.808 2.086 - 2.098: 53.0828% ( 1159) 00:17:10.808 2.098 - 2.110: 56.1656% ( 420) 00:17:10.808 2.110 - 2.121: 62.9037% ( 918) 00:17:10.808 2.121 - 2.133: 64.6653% ( 240) 00:17:10.808 2.133 - 2.145: 70.4712% ( 791) 00:17:10.808 2.145 - 2.157: 82.3106% ( 1613) 00:17:10.808 2.157 - 2.169: 83.8447% ( 209) 00:17:10.809 2.169 - 2.181: 86.6853% ( 387) 00:17:10.809 2.181 - 2.193: 90.1424% ( 471) 00:17:10.809 2.193 - 2.204: 91.0893% ( 129) 00:17:10.809 2.204 - 2.216: 91.9113% ( 112) 00:17:10.809 2.216 - 2.228: 93.9518% ( 278) 00:17:10.809 2.228 - 2.240: 95.3464% ( 190) 00:17:10.809 2.240 - 2.252: 95.6474% ( 41) 00:17:10.809 2.252 - 2.264: 95.8382% ( 26) 00:17:10.809 2.264 - 2.276: 95.8823% ( 6) 00:17:10.809 2.276 - 2.287: 95.9703% ( 12) 00:17:10.809 2.287 - 2.299: 96.1171% ( 20) 00:17:10.809 2.299 - 2.311: 96.2933% ( 24) 00:17:10.809 2.311 - 2.323: 96.3520% ( 8) 00:17:10.809 2.323 - 2.335: 96.3887% ( 5) 00:17:10.809 2.335 - 2.347: 96.4988% ( 15) 00:17:10.809 2.347 - 2.359: 96.7410% ( 33) 00:17:10.809 2.359 - 2.370: 97.1154% ( 51) 00:17:10.809 2.370 - 2.382: 97.5338% ( 57) 00:17:10.809 2.382 - 2.394: 97.8714% ( 46) 00:17:10.809 2.394 - 2.406: 98.0916% ( 30) 00:17:10.809 2.406 - 2.418: 98.2678% ( 24) 00:17:10.809 2.418 - 2.430: 98.3999% ( 18) 00:17:10.809 2.430 - 2.441: 98.4880% ( 12) 00:17:10.809 2.441 - 2.453: 98.5247% ( 5) 00:17:10.809 2.453 - 2.465: 98.5467% ( 3) 00:17:10.809 2.465 - 2.477: 98.5834% ( 5) 00:17:10.809 2.477 - 2.489: 98.5907% ( 1) 00:17:10.809 2.489 - 2.501: 98.6054% ( 2) 00:17:10.809 2.501 - 2.513: 98.6127% ( 1) 00:17:10.809 2.560 - 2.572: 98.6201% ( 1) 00:17:10.809 2.572 - 2.584: 98.6274% ( 1) 00:17:10.809 2.607 - 2.619: 98.6348% ( 1) 00:17:10.809 2.619 - 2.631: 98.6421% ( 1) 00:17:10.809 2.655 - 2.667: 98.6494% ( 1) 00:17:10.809 2.714 - 2.726: 98.6568% ( 1) 00:17:10.809 2.821 - 2.833: 98.6641% ( 1) 00:17:10.809 3.271 - 3.295: 98.6715% ( 1) 00:17:10.809 3.342 - 3.366: 98.6861% ( 2) 00:17:10.809 3.366 - 3.390: 98.6935% ( 1) 00:17:10.809 3.413 - 3.437: 98.7082% ( 2) 00:17:10.809 3.437 - 3.461: 98.7302% ( 3) 00:17:10.809 3.461 - 3.484: 98.7375% ( 1) 00:17:10.809 3.484 - 3.508: 98.7522% ( 2) 00:17:10.809 3.508 - 3.532: 98.7595% ( 1) 00:17:10.809 3.556 - 3.579: 98.7669% ( 1) 00:17:10.809 3.650 - 3.674: 98.7742% ( 1) 00:17:10.809 3.769 - 3.793: 98.7889% ( 2) 00:17:10.809 3.864 - 3.887: 98.7962% ( 1) 00:17:10.809 3.935 - 3.959: 98.8036% ( 1) 00:17:10.809 4.030 - 4.053: 98.8109% ( 1) 00:17:10.809 4.101 - 4.124: 98.8183% ( 1) 00:17:10.809 5.191 - 5.215: 98.8329% ( 2) 
00:17:10.809 5.215 - 5.239: 98.8403% ( 1) 00:17:10.809 5.476 - 5.499: 98.8476% ( 1) 00:17:10.809 5.760 - 5.784: 98.8550% ( 1) 00:17:10.809 6.021 - 6.044: 98.8623% ( 1) 00:17:10.809 6.116 - 6.163: 98.8696% ( 1) 00:17:10.809 6.163 - 6.210: 98.8770% ( 1) 00:17:10.809 6.353 - 6.400: 98.8843% ( 1) 00:17:10.809 6.400 - 6.447: 98.8917% ( 1) 00:17:10.809 7.253 - 7.301: 98.8990% ( 1) 00:17:10.809 7.301 - 7.348: 98.9063% ( 1) 00:17:10.809 7.727 - 7.775: 98.9137% ( 1) 00:17:10.809 12.136 - 12.231: 98.9210% ( 1) 00:17:11.068 15.360 - 15.455: 98.9284% ( 1) [2024-07-25 23:23:08.534064] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:11.068 15.550 - 15.644: 98.9430% ( 2) 00:17:11.068 15.644 - 15.739: 98.9651% ( 3) 00:17:11.068 15.739 - 15.834: 98.9871% ( 3) 00:17:11.068 15.834 - 15.929: 99.0238% ( 5) 00:17:11.068 15.929 - 16.024: 99.0678% ( 6) 00:17:11.068 16.024 - 16.119: 99.0898% ( 3) 00:17:11.068 16.119 - 16.213: 99.1119% ( 3) 00:17:11.068 16.213 - 16.308: 99.1339% ( 3) 00:17:11.068 16.308 - 16.403: 99.1486% ( 2) 00:17:11.068 16.403 - 16.498: 99.1853% ( 5) 00:17:11.068 16.498 - 16.593: 99.2073% ( 3) 00:17:11.068 16.593 - 16.687: 99.2366% ( 4) 00:17:11.068 16.687 - 16.782: 99.2513% ( 2) 00:17:11.068 16.877 - 16.972: 99.2880% ( 5) 00:17:11.068 16.972 - 17.067: 99.3100% ( 3) 00:17:11.068 17.161 - 17.256: 99.3174% ( 1) 00:17:11.068 17.256 - 17.351: 99.3321% ( 2) 00:17:11.068 17.351 - 17.446: 99.3467% ( 2) 00:17:11.068 17.446 - 17.541: 99.3541% ( 1) 00:17:11.068 17.825 - 17.920: 99.3614% ( 1) 00:17:11.068 18.204 - 18.299: 99.3688% ( 1) 00:17:11.068 18.394 - 18.489: 99.3834% ( 2) 00:17:11.068 18.584 - 18.679: 99.3908% ( 1) 00:17:11.068 19.911 - 20.006: 99.3981% ( 1) 00:17:11.068 3980.705 - 4004.978: 99.9339% ( 73) 00:17:11.068 4004.978 - 4029.250: 100.0000% ( 9) 00:17:11.068 00:17:11.068 23:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:11.068 23:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:11.068 23:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:11.068 23:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:11.068 23:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:11.327 [ 00:17:11.327 { 00:17:11.327 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:11.327 "subtype": "Discovery", 00:17:11.327 "listen_addresses": [], 00:17:11.327 "allow_any_host": true, 00:17:11.327 "hosts": [] 00:17:11.327 }, 00:17:11.327 { 00:17:11.327 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:11.327 "subtype": "NVMe", 00:17:11.327 "listen_addresses": [ 00:17:11.327 { 00:17:11.327 "trtype": "VFIOUSER", 00:17:11.327 "adrfam": "IPv4", 00:17:11.327 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:11.327 "trsvcid": "0" 00:17:11.327 } 00:17:11.327 ], 00:17:11.327 "allow_any_host": true, 00:17:11.327 "hosts": [], 00:17:11.327 "serial_number": "SPDK1", 00:17:11.327 "model_number": "SPDK bdev Controller", 00:17:11.327 "max_namespaces": 32, 00:17:11.327 "min_cntlid": 1, 00:17:11.327 "max_cntlid": 65519, 00:17:11.327 "namespaces": [ 00:17:11.327 { 00:17:11.327 "nsid": 1,
00:17:11.327 "bdev_name": "Malloc1", 00:17:11.327 "name": "Malloc1", 00:17:11.327 "nguid": "A1FD4CA47588441B8AAE0E0466DCB675", 00:17:11.327 "uuid": "a1fd4ca4-7588-441b-8aae-0e0466dcb675" 00:17:11.327 }, 00:17:11.327 { 00:17:11.327 "nsid": 2, 00:17:11.327 "bdev_name": "Malloc3", 00:17:11.327 "name": "Malloc3", 00:17:11.327 "nguid": "9D774A2A2DD04163ACDFD4475E61B420", 00:17:11.327 "uuid": "9d774a2a-2dd0-4163-acdf-d4475e61b420" 00:17:11.327 } 00:17:11.327 ] 00:17:11.327 }, 00:17:11.327 { 00:17:11.327 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:11.327 "subtype": "NVMe", 00:17:11.327 "listen_addresses": [ 00:17:11.327 { 00:17:11.327 "trtype": "VFIOUSER", 00:17:11.327 "adrfam": "IPv4", 00:17:11.327 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:11.327 "trsvcid": "0" 00:17:11.327 } 00:17:11.327 ], 00:17:11.327 "allow_any_host": true, 00:17:11.327 "hosts": [], 00:17:11.327 "serial_number": "SPDK2", 00:17:11.327 "model_number": "SPDK bdev Controller", 00:17:11.327 "max_namespaces": 32, 00:17:11.327 "min_cntlid": 1, 00:17:11.327 "max_cntlid": 65519, 00:17:11.327 "namespaces": [ 00:17:11.327 { 00:17:11.327 "nsid": 1, 00:17:11.327 "bdev_name": "Malloc2", 00:17:11.327 "name": "Malloc2", 00:17:11.327 "nguid": "51F208BB91704DC0B79A608A65F1482F", 00:17:11.327 "uuid": "51f208bb-9170-4dc0-b79a-608a65f1482f" 00:17:11.327 } 00:17:11.327 ] 00:17:11.327 } 00:17:11.327 ] 00:17:11.327 23:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:11.327 23:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1380424 00:17:11.327 23:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:11.327 23:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:11.327 23:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:11.327 23:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:11.327 23:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:11.327 23:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:11.327 23:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:11.327 23:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:11.327 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.327 [2024-07-25 23:23:08.995633] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:11.586 Malloc4 00:17:11.586 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:11.843 [2024-07-25 23:23:09.347347] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:11.843 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:11.843 Asynchronous Event Request test 00:17:11.843 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:11.843 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:11.843 Registering asynchronous event callbacks... 00:17:11.843 Starting namespace attribute notice tests for all controllers... 00:17:11.843 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:11.843 aer_cb - Changed Namespace 00:17:11.843 Cleaning up... 00:17:12.102 [ 00:17:12.102 { 00:17:12.102 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:12.102 "subtype": "Discovery", 00:17:12.102 "listen_addresses": [], 00:17:12.102 "allow_any_host": true, 00:17:12.102 "hosts": [] 00:17:12.102 }, 00:17:12.102 { 00:17:12.102 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:12.102 "subtype": "NVMe", 00:17:12.102 "listen_addresses": [ 00:17:12.102 { 00:17:12.102 "trtype": "VFIOUSER", 00:17:12.102 "adrfam": "IPv4", 00:17:12.102 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:12.102 "trsvcid": "0" 00:17:12.102 } 00:17:12.102 ], 00:17:12.102 "allow_any_host": true, 00:17:12.102 "hosts": [], 00:17:12.102 "serial_number": "SPDK1", 00:17:12.102 "model_number": "SPDK bdev Controller", 00:17:12.102 "max_namespaces": 32, 00:17:12.102 "min_cntlid": 1, 00:17:12.102 "max_cntlid": 65519, 00:17:12.102 "namespaces": [ 00:17:12.102 { 00:17:12.102 "nsid": 1, 00:17:12.102 "bdev_name": "Malloc1", 00:17:12.102 "name": "Malloc1", 00:17:12.102 "nguid": "A1FD4CA47588441B8AAE0E0466DCB675", 00:17:12.102 "uuid": "a1fd4ca4-7588-441b-8aae-0e0466dcb675" 00:17:12.102 }, 00:17:12.102 { 00:17:12.102 "nsid": 2, 00:17:12.102 "bdev_name": "Malloc3", 00:17:12.102 "name": "Malloc3", 00:17:12.102 "nguid": "9D774A2A2DD04163ACDFD4475E61B420", 00:17:12.102 "uuid": "9d774a2a-2dd0-4163-acdf-d4475e61b420" 00:17:12.102 } 00:17:12.102 ] 00:17:12.102 }, 00:17:12.102 { 00:17:12.102 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:12.102 "subtype": "NVMe", 00:17:12.102 "listen_addresses": [ 00:17:12.102 { 00:17:12.102 "trtype": "VFIOUSER", 00:17:12.102 "adrfam": "IPv4", 00:17:12.102 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:12.102 "trsvcid": "0" 00:17:12.102 } 00:17:12.102 ], 00:17:12.102 "allow_any_host": true, 00:17:12.102 "hosts": [], 00:17:12.102 
"serial_number": "SPDK2", 00:17:12.102 "model_number": "SPDK bdev Controller", 00:17:12.102 "max_namespaces": 32, 00:17:12.102 "min_cntlid": 1, 00:17:12.102 "max_cntlid": 65519, 00:17:12.102 "namespaces": [ 00:17:12.102 { 00:17:12.102 "nsid": 1, 00:17:12.102 "bdev_name": "Malloc2", 00:17:12.102 "name": "Malloc2", 00:17:12.102 "nguid": "51F208BB91704DC0B79A608A65F1482F", 00:17:12.102 "uuid": "51f208bb-9170-4dc0-b79a-608a65f1482f" 00:17:12.102 }, 00:17:12.102 { 00:17:12.102 "nsid": 2, 00:17:12.102 "bdev_name": "Malloc4", 00:17:12.102 "name": "Malloc4", 00:17:12.102 "nguid": "ED2241369BB34D7FBA933F11B2C72093", 00:17:12.102 "uuid": "ed224136-9bb3-4d7f-ba93-3f11b2c72093" 00:17:12.102 } 00:17:12.102 ] 00:17:12.102 } 00:17:12.102 ] 00:17:12.102 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1380424 00:17:12.102 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:12.102 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1374836 00:17:12.102 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1374836 ']' 00:17:12.102 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1374836 00:17:12.102 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:12.102 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:12.102 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1374836 00:17:12.102 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:12.102 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:12.102 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1374836' 00:17:12.102 killing process with pid 1374836 00:17:12.102 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1374836 00:17:12.102 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1374836 00:17:12.361 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:12.361 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:12.361 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:12.361 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:12.361 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:12.361 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1380556 00:17:12.361 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:12.361 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1380556' 00:17:12.361 Process pid: 1380556 00:17:12.361 23:23:09 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:12.361 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1380556 00:17:12.361 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1380556 ']' 00:17:12.361 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.361 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:12.361 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.361 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:12.361 23:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:12.361 [2024-07-25 23:23:10.023608] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:12.361 [2024-07-25 23:23:10.024775] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:17:12.361 [2024-07-25 23:23:10.024851] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.361 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.361 [2024-07-25 23:23:10.059422] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:12.620 [2024-07-25 23:23:10.090231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:12.620 [2024-07-25 23:23:10.182390] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.620 [2024-07-25 23:23:10.182447] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.620 [2024-07-25 23:23:10.182475] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.620 [2024-07-25 23:23:10.182488] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.620 [2024-07-25 23:23:10.182500] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:12.620 [2024-07-25 23:23:10.182583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.620 [2024-07-25 23:23:10.182652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.620 [2024-07-25 23:23:10.186081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:12.620 [2024-07-25 23:23:10.186108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.620 [2024-07-25 23:23:10.294569] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:12.620 [2024-07-25 23:23:10.294812] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:17:12.620 [2024-07-25 23:23:10.295052] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:17:12.620 [2024-07-25 23:23:10.295670] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:12.620 [2024-07-25 23:23:10.295898] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:17:12.620 23:23:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:12.620 23:23:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:17:12.620 23:23:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:13.999 23:23:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:13.999 23:23:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:13.999 23:23:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:13.999 23:23:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:13.999 23:23:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:13.999 23:23:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:14.257 Malloc1 00:17:14.257 23:23:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:14.515 23:23:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:14.773 23:23:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:15.031 23:23:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:15.031 23:23:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:15.031 23:23:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:15.289 Malloc2 00:17:15.289 23:23:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:15.854 23:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:15.854 23:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:16.110 23:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:16.110 23:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1380556 00:17:16.110 23:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1380556 ']' 00:17:16.110 23:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1380556 00:17:16.110 23:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:16.110 23:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:16.110 23:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1380556 00:17:16.110 23:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:16.110 23:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:16.110 23:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1380556' 00:17:16.110 killing process with pid 1380556 00:17:16.110 23:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1380556 00:17:16.110 23:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1380556 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:16.678 00:17:16.678 real 0m52.612s 00:17:16.678 user 3m27.567s 00:17:16.678 sys 0m4.356s 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:16.678 ************************************ 00:17:16.678 END TEST nvmf_vfio_user 00:17:16.678 ************************************ 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:16.678 ************************************ 00:17:16.678 START TEST nvmf_vfio_user_nvme_compliance 00:17:16.678 ************************************ 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:16.678 * Looking for test storage... 
00:17:16.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
export PATH 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- #
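
The interrupt-mode pass traced just before this compliance preamble differs from the earlier bring-up only in how the target and transport are started. A condensed sketch, with nvmf_tgt and rpc.py abbreviating their full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk paths and every flag taken verbatim from this run:

    # --interrupt-mode starts the reactors in interrupt rather than poll mode;
    # -e 0xFFFF is the tracepoint group mask the trace reports
    nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    # -M -I are the extra vfio-user transport flags this mode adds, forwarded verbatim
    rpc.py nvmf_create_transport -t VFIOUSER -M -I
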
NVMF_APP+=("${NO_HUGE[@]}") 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:16.678 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:16.679 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1381051 00:17:16.679 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:16.679 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1381051' 00:17:16.679 Process pid: 1381051 00:17:16.679 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:16.679 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1381051 00:17:16.679 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1381051 ']' 00:17:16.679 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.679 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:16.679 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.679 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:16.679 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:16.679 [2024-07-25 23:23:14.278158] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:17:16.679 [2024-07-25 23:23:14.278239] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.679 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.679 [2024-07-25 23:23:14.310275] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:17:16.679 [2024-07-25 23:23:14.336622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:16.938 [2024-07-25 23:23:14.423453] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.938 [2024-07-25 23:23:14.423502] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:16.938 [2024-07-25 23:23:14.423527] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.938 [2024-07-25 23:23:14.423539] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.938 [2024-07-25 23:23:14.423549] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:16.938 [2024-07-25 23:23:14.423706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.938 [2024-07-25 23:23:14.423760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.938 [2024-07-25 23:23:14.423762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.938 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:16.938 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:17:16.938 23:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:17.876 23:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:17.876 23:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:17.876 23:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:17.876 23:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.876 23:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:17.876 23:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.876 23:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:17.876 23:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:17.876 23:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.876 23:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:17.876 malloc0 00:17:17.876 23:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.876 23:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:17.876 23:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.876 23:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:17.876 23:23:15 
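
The compliance pass that follows stands up a single vfio-user controller and drives it through the CUnit suite. Its RPC setup and launch, condensed (rpc_cmd in the trace wraps scripts/rpc.py; all NQNs, sizes and paths are the ones this run uses):

    rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    rpc.py bdev_malloc_create 64 512 -b malloc0
    # -m 32 caps the subsystem at 32 namespaces; -s spdk sets the serial number
    rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    # the -r transport ID mirrors the listener added above
    nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
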
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.876 23:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:17.876 23:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.876 23:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:18.134 23:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.134 23:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:18.134 23:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.134 23:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:18.134 23:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.134 23:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:18.134 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.134 00:17:18.134 00:17:18.134 CUnit - A unit testing framework for C - Version 2.1-3 00:17:18.134 http://cunit.sourceforge.net/ 00:17:18.134 00:17:18.134 00:17:18.134 Suite: nvme_compliance 00:17:18.134 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-25 23:23:15.771526] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:18.134 [2024-07-25 23:23:15.773002] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:18.134 [2024-07-25 23:23:15.773042] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:18.134 [2024-07-25 23:23:15.773055] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:18.134 [2024-07-25 23:23:15.774539] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:18.134 passed 00:17:18.134 Test: admin_identify_ctrlr_verify_fused ...[2024-07-25 23:23:15.859143] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:18.393 [2024-07-25 23:23:15.862161] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:18.393 passed 00:17:18.393 Test: admin_identify_ns ...[2024-07-25 23:23:15.948572] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:18.393 [2024-07-25 23:23:16.008092] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:18.393 [2024-07-25 23:23:16.016073] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:18.393 [2024-07-25 23:23:16.037202] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:18.393 passed 00:17:18.653 Test: admin_get_features_mandatory_features ...[2024-07-25 23:23:16.120300] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling 
controller 00:17:18.653 [2024-07-25 23:23:16.123325] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:18.653 passed 00:17:18.653 Test: admin_get_features_optional_features ...[2024-07-25 23:23:16.206872] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:18.653 [2024-07-25 23:23:16.209893] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:18.653 passed 00:17:18.653 Test: admin_set_features_number_of_queues ...[2024-07-25 23:23:16.293620] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:18.911 [2024-07-25 23:23:16.398172] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:18.911 passed 00:17:18.911 Test: admin_get_log_page_mandatory_logs ...[2024-07-25 23:23:16.480308] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:18.911 [2024-07-25 23:23:16.483329] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:18.911 passed 00:17:18.911 Test: admin_get_log_page_with_lpo ...[2024-07-25 23:23:16.567567] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:18.911 [2024-07-25 23:23:16.635077] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:19.169 [2024-07-25 23:23:16.648149] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:19.169 passed 00:17:19.169 Test: fabric_property_get ...[2024-07-25 23:23:16.729394] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:19.169 [2024-07-25 23:23:16.730660] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:19.169 [2024-07-25 23:23:16.734428] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:19.169 passed 00:17:19.169 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-25 23:23:16.819979] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:19.169 [2024-07-25 23:23:16.821288] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:19.169 [2024-07-25 23:23:16.822999] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:19.169 passed 00:17:19.429 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-25 23:23:16.904186] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:19.429 [2024-07-25 23:23:16.988082] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:19.429 [2024-07-25 23:23:17.004075] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:19.429 [2024-07-25 23:23:17.009178] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:19.429 passed 00:17:19.429 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-25 23:23:17.091275] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:19.429 [2024-07-25 23:23:17.092560] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:19.429 [2024-07-25 23:23:17.094294] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:19.429 passed 00:17:19.689 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-25 23:23:17.178838] vfio_user.c:2836:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user: enabling controller 00:17:19.689 [2024-07-25 23:23:17.257072] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:19.689 [2024-07-25 23:23:17.280069] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:19.689 [2024-07-25 23:23:17.285173] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:19.689 passed 00:17:19.689 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-25 23:23:17.367424] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:19.690 [2024-07-25 23:23:17.368714] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:19.690 [2024-07-25 23:23:17.368750] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:19.690 [2024-07-25 23:23:17.370462] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:19.690 passed 00:17:19.950 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-25 23:23:17.452690] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:19.950 [2024-07-25 23:23:17.544085] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:19.950 [2024-07-25 23:23:17.552080] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:19.950 [2024-07-25 23:23:17.560081] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:19.950 [2024-07-25 23:23:17.568068] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:19.950 [2024-07-25 23:23:17.597197] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:19.950 passed 00:17:20.210 Test: admin_create_io_sq_verify_pc ...[2024-07-25 23:23:17.683410] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:20.210 [2024-07-25 23:23:17.700093] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:20.210 [2024-07-25 23:23:17.717753] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:20.210 passed 00:17:20.210 Test: admin_create_io_qp_max_qps ...[2024-07-25 23:23:17.800322] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:21.615 [2024-07-25 23:23:18.900076] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:17:21.615 [2024-07-25 23:23:19.285851] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:21.615 passed 00:17:21.874 Test: admin_create_io_sq_shared_cq ...[2024-07-25 23:23:19.369190] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:21.874 [2024-07-25 23:23:19.500094] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:21.874 [2024-07-25 23:23:19.538160] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:21.874 passed 00:17:21.874 00:17:21.874 Run Summary: Type Total Ran Passed Failed Inactive 00:17:21.874 suites 1 1 n/a 0 0 00:17:21.874 tests 18 18 18 0 0 00:17:21.874 asserts 360 360 360 0 n/a 00:17:21.874 00:17:21.874 Elapsed time = 1.564 seconds 00:17:21.874 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 
1381051 00:17:21.874 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1381051 ']' 00:17:21.874 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1381051 00:17:21.874 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:17:21.874 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:21.874 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1381051 00:17:22.132 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:22.132 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:22.132 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1381051' 00:17:22.132 killing process with pid 1381051 00:17:22.132 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1381051 00:17:22.132 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1381051 00:17:22.391 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:22.391 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:22.391 00:17:22.391 real 0m5.707s 00:17:22.391 user 0m16.061s 00:17:22.391 sys 0m0.547s 00:17:22.391 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:22.391 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:22.391 ************************************ 00:17:22.391 END TEST nvmf_vfio_user_nvme_compliance 00:17:22.391 ************************************ 00:17:22.391 23:23:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:22.391 23:23:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:22.391 23:23:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:22.392 ************************************ 00:17:22.392 START TEST nvmf_vfio_user_fuzz 00:17:22.392 ************************************ 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:22.392 * Looking for test storage... 
00:17:22.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
export PATH 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33
-- # '[' -n '' ']' 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1381776 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1381776' 00:17:22.392 Process pid: 1381776 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1381776 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1381776 ']' 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:22.392 23:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:22.652 23:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:22.652 23:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:17:22.652 23:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:23.588 23:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:23.588 23:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.588 23:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:23.588 23:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.588 23:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:23.588 23:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:23.588 23:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.588 23:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:23.849 malloc0 00:17:23.849 23:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.849 23:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:23.849 23:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.849 23:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:23.849 23:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.849 23:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:23.849 23:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.849 23:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:23.849 23:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.849 23:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:23.849 23:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.849 23:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:23.849 23:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.849 23:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
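
With malloc0 exposed through nqn.2021-09.io.spdk:cnode0, the fixed-seed fuzz run that follows reduces to a single command; nvme_fuzz abbreviates the full test/app/fuzz/nvme_fuzz path, and -N and -a are forwarded exactly as vfio_user_fuzz.sh passes them:

    # -t 30: seconds to fuzz; -S 123456: fixed seed, making the run reproducible;
    # -F: the vfio-user transport ID assembled in $trid above
    nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
        -N -a
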
00:17:23.849 23:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:55.925 Fuzzing completed. Shutting down the fuzz application 00:17:55.925 00:17:55.925 Dumping successful admin opcodes: 00:17:55.925 8, 9, 10, 24, 00:17:55.925 Dumping successful io opcodes: 00:17:55.925 0, 00:17:55.925 NS: 0x200003a1ef00 I/O qp, Total commands completed: 618694, total successful commands: 2388, random_seed: 213514176 00:17:55.925 NS: 0x200003a1ef00 admin qp, Total commands completed: 152940, total successful commands: 1235, random_seed: 1410985152 00:17:55.925 23:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:55.925 23:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.925 23:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:55.925 23:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.925 23:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1381776 00:17:55.925 23:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1381776 ']' 00:17:55.925 23:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1381776 00:17:55.925 23:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:17:55.925 23:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:55.925 23:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1381776 00:17:55.925 23:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:55.925 23:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:55.925 23:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1381776' 00:17:55.925 killing process with pid 1381776 00:17:55.925 23:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1381776 00:17:55.925 23:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1381776 00:17:55.925 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:55.925 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:55.925 00:17:55.925 real 0m32.230s 00:17:55.925 user 0m32.189s 00:17:55.925 sys 0m26.736s 00:17:55.925 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:55.925 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:55.925 
************************************ 00:17:55.925 END TEST nvmf_vfio_user_fuzz 00:17:55.925 ************************************ 00:17:55.925 23:23:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:55.925 23:23:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:55.925 23:23:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:55.925 23:23:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:55.925 ************************************ 00:17:55.925 START TEST nvmf_auth_target 00:17:55.925 ************************************ 00:17:55.925 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:55.925 * Looking for test storage... 00:17:55.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:55.925 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:55.925 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:55.925 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.925 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.925 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.925 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.925 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.925 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.925 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:55.926 23:23:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 
00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:55.926 23:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.491 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:56.491 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:56.491 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:56.491 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:17:56.491 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:56.491 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:56.491 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:56.491 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:56.492 23:23:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:56.492 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:56.492 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:56.492 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:56.492 23:23:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:56.492 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:56.492 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:56.493 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:56.493 23:23:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:56.493 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:56.493 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:56.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:56.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:17:56.493 00:17:56.493 --- 10.0.0.2 ping statistics --- 00:17:56.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.493 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:17:56.752 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:56.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:56.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:17:56.752 00:17:56.752 --- 10.0.0.1 ping statistics --- 00:17:56.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.752 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:17:56.752 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:56.752 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:56.752 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:56.752 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:56.752 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:56.752 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:56.752 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:56.752 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:56.752 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:56.752 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:56.752 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:56.752 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:56.752 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.752 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1387196 00:17:56.753 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:56.753 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1387196 00:17:56.753 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1387196 ']' 00:17:56.753 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.753 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:56.753 23:23:54 
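The block above is the standard nvmf test-net bring-up for a physical (phy) run: both E810 ports are discovered (cvl_0_0, cvl_0_1), one port is moved into a dedicated network namespace to act as the target side, addresses are assigned, TCP port 4420 is opened, and connectivity is verified in both directions before nvmf_tgt is launched inside that namespace. Condensed into a runnable sketch (device names and the 10.0.0.0/24 addresses are specific to this CI host, taken verbatim from the trace):

ip netns add cvl_0_0_ns_spdk                         # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move one NIC port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

Every target-side command from here on is therefore wrapped in "ip netns exec cvl_0_0_ns_spdk", which is how nvmfappstart launches nvmf_tgt in the entries that follow.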
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.753 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:56.753 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1387223 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=30f19537042d7476e3bcb7c5d632a3beee1a6df0bbce4ff9 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.xhg 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 30f19537042d7476e3bcb7c5d632a3beee1a6df0bbce4ff9 0 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 30f19537042d7476e3bcb7c5d632a3beee1a6df0bbce4ff9 0 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=30f19537042d7476e3bcb7c5d632a3beee1a6df0bbce4ff9 00:17:57.010 23:23:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.xhg 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.xhg 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.xhg 00:17:57.010 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c95ce80d130449766046df47d9fa105503538e6f42fe0da326f58a33a3ea2502 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.FOH 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c95ce80d130449766046df47d9fa105503538e6f42fe0da326f58a33a3ea2502 3 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c95ce80d130449766046df47d9fa105503538e6f42fe0da326f58a33a3ea2502 3 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c95ce80d130449766046df47d9fa105503538e6f42fe0da326f58a33a3ea2502 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.FOH 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.FOH 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.FOH 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:57.011 23:23:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bc4c4595cfd24b1f73741040057d0fa0 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Ko2 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bc4c4595cfd24b1f73741040057d0fa0 1 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bc4c4595cfd24b1f73741040057d0fa0 1 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bc4c4595cfd24b1f73741040057d0fa0 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:57.011 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Ko2 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Ko2 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.Ko2 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a0c434a528dd17f5a47ed6be772dcb585e218d616e585103 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.dbv 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a0c434a528dd17f5a47ed6be772dcb585e218d616e585103 2 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
a0c434a528dd17f5a47ed6be772dcb585e218d616e585103 2 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a0c434a528dd17f5a47ed6be772dcb585e218d616e585103 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.dbv 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.dbv 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.dbv 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=25df6a174466f01173c39df30d5ffaa8ecef91ff79529f3e 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.C1J 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 25df6a174466f01173c39df30d5ffaa8ecef91ff79529f3e 2 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 25df6a174466f01173c39df30d5ffaa8ecef91ff79529f3e 2 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=25df6a174466f01173c39df30d5ffaa8ecef91ff79529f3e 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.C1J 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.C1J 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.C1J 00:17:57.269 23:23:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3c06330b3c7ee44d83d153bb56de06e4 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.eOw 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3c06330b3c7ee44d83d153bb56de06e4 1 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3c06330b3c7ee44d83d153bb56de06e4 1 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3c06330b3c7ee44d83d153bb56de06e4 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.eOw 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.eOw 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.eOw 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=32e06d686db87140dff8f29bded838f0291139716b998047c8d5e8518aec2cf6 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:57.269 
23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.gDw 00:17:57.269 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 32e06d686db87140dff8f29bded838f0291139716b998047c8d5e8518aec2cf6 3 00:17:57.270 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 32e06d686db87140dff8f29bded838f0291139716b998047c8d5e8518aec2cf6 3 00:17:57.270 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:57.270 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:57.270 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=32e06d686db87140dff8f29bded838f0291139716b998047c8d5e8518aec2cf6 00:17:57.270 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:57.270 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:57.270 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.gDw 00:17:57.270 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.gDw 00:17:57.270 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.gDw 00:17:57.270 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:57.270 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1387196 00:17:57.270 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1387196 ']' 00:17:57.270 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.270 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:57.270 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.270 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:57.270 23:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.528 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:57.528 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:57.528 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1387223 /var/tmp/host.sock 00:17:57.528 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1387223 ']' 00:17:57.528 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:17:57.528 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:57.528 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
00:17:57.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:57.528 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:57.528 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.786 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:57.786 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:57.786 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:57.786 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.786 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.786 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.043 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:58.043 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.xhg 00:17:58.043 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.043 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.043 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.043 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.xhg 00:17:58.043 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.xhg 00:17:58.043 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.FOH ]] 00:17:58.043 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FOH 00:17:58.043 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.043 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.300 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.301 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FOH 00:17:58.301 23:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FOH 00:17:58.301 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:58.301 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Ko2 00:17:58.301 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.301 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.301 23:23:56 
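The long gen_dhchap_key block above builds the key/ctrlr-key pairs the auth matrix will use: each secret is a random hex string pulled from /dev/urandom with xxd, written to a chmod-0600 temp file, and rendered in the DHHC-1 wire format. The inline encoder shows up in the trace only as "python -", but the secrets printed later in this log (DHHC-1:00:MzBm..., DHHC-1:03:Yzk1...) confirm the shape: base64 over the ASCII secret plus a little-endian CRC32 tail, prefixed with a digest tag (00 = no hash, 01 = sha256, 02 = sha384, 03 = sha512). A minimal reconstruction of that flow, not the verbatim helper:

# Sketch of the gen_dhchap_key/format_key steps traced above (a reconstruction).
key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars; the hex text itself is the secret
digest=0                               # 0=null, 1=sha256, 2=sha384, 3=sha512
python3 -c 'import base64, sys, zlib
k = sys.argv[1].encode()
crc = zlib.crc32(k).to_bytes(4, "little")   # 4-byte CRC32 tail carried inside the base64
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + crc).decode()))' "$key" "$digest"

Run against key 0 above (30f19537...4ff9, digest 0), this reproduces the DHHC-1:00:MzBmMTk1...LIqEHw==: secret that nvme connect is handed later in the log.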
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.301 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Ko2 00:17:58.301 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Ko2 00:17:58.558 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.dbv ]] 00:17:58.558 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.dbv 00:17:58.558 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.558 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.558 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.558 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.dbv 00:17:58.558 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.dbv 00:17:58.816 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:58.816 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.C1J 00:17:58.816 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.816 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.816 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.816 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.C1J 00:17:58.816 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.C1J 00:17:59.074 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.eOw ]] 00:17:59.074 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.eOw 00:17:59.074 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.074 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.074 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.074 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.eOw 00:17:59.074 23:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.eOw 00:17:59.332 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:59.332 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.gDw 00:17:59.332 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.332 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.332 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.332 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.gDw 00:17:59.332 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.gDw 00:17:59.590 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:59.590 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:59.590 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:59.590 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.590 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:59.590 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:59.849 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:59.849 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.849 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:59.849 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:59.849 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:59.849 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.849 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.849 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.849 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.849 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.849 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.849 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.417 00:18:00.417 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.417 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.417 23:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.418 23:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.418 23:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.418 23:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.418 23:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.418 23:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.418 23:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.418 { 00:18:00.418 "cntlid": 1, 00:18:00.418 "qid": 0, 00:18:00.418 "state": "enabled", 00:18:00.418 "thread": "nvmf_tgt_poll_group_000", 00:18:00.418 "listen_address": { 00:18:00.418 "trtype": "TCP", 00:18:00.418 "adrfam": "IPv4", 00:18:00.418 "traddr": "10.0.0.2", 00:18:00.418 "trsvcid": "4420" 00:18:00.418 }, 00:18:00.418 "peer_address": { 00:18:00.418 "trtype": "TCP", 00:18:00.418 "adrfam": "IPv4", 00:18:00.418 "traddr": "10.0.0.1", 00:18:00.418 "trsvcid": "52982" 00:18:00.418 }, 00:18:00.418 "auth": { 00:18:00.418 "state": "completed", 00:18:00.418 "digest": "sha256", 00:18:00.418 "dhgroup": "null" 00:18:00.418 } 00:18:00.418 } 00:18:00.418 ]' 00:18:00.418 23:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.418 23:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:00.676 23:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.676 23:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:00.676 23:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.676 23:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.676 23:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.676 23:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.934 23:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:MzBmMTk1MzcwNDJkNzQ3NmUzYmNiN2M1ZDYzMmEzYmVlZTFhNmRmMGJiY2U0ZmY5LIqEHw==: --dhchap-ctrl-secret DHHC-1:03:Yzk1Y2U4MGQxMzA0NDk3NjYwNDZkZjQ3ZDlmYTEwNTUwMzUzOGU2ZjQyZmUwZGEzMjZmNThhMzNhM2VhMjUwMvkt5lk=: 00:18:01.872 23:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.872 23:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:01.872 23:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.872 23:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.872 23:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.872 23:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.872 23:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:01.872 23:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:02.129 23:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:02.129 23:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.129 23:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:02.129 23:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:02.129 23:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:02.129 23:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.129 23:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.129 23:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.129 23:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.129 23:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.129 23:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.129 23:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:18:02.386 00:18:02.386 23:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.386 23:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.387 23:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.644 23:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.644 23:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.644 23:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.644 23:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.644 23:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.644 23:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.644 { 00:18:02.644 "cntlid": 3, 00:18:02.644 "qid": 0, 00:18:02.644 "state": "enabled", 00:18:02.644 "thread": "nvmf_tgt_poll_group_000", 00:18:02.644 "listen_address": { 00:18:02.644 "trtype": "TCP", 00:18:02.644 "adrfam": "IPv4", 00:18:02.644 "traddr": "10.0.0.2", 00:18:02.644 "trsvcid": "4420" 00:18:02.644 }, 00:18:02.644 "peer_address": { 00:18:02.644 "trtype": "TCP", 00:18:02.644 "adrfam": "IPv4", 00:18:02.644 "traddr": "10.0.0.1", 00:18:02.644 "trsvcid": "38228" 00:18:02.644 }, 00:18:02.644 "auth": { 00:18:02.644 "state": "completed", 00:18:02.644 "digest": "sha256", 00:18:02.644 "dhgroup": "null" 00:18:02.644 } 00:18:02.644 } 00:18:02.644 ]' 00:18:02.644 23:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.644 23:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.644 23:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.901 23:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:02.901 23:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.901 23:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.901 23:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.901 23:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.158 23:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmM0YzQ1OTVjZmQyNGIxZjczNzQxMDQwMDU3ZDBmYTDiTEuE: --dhchap-ctrl-secret DHHC-1:02:YTBjNDM0YTUyOGRkMTdmNWE0N2VkNmJlNzcyZGNiNTg1ZTIxOGQ2MTZlNTg1MTAzl9e2zQ==: 00:18:04.093 23:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.093 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:18:04.093 23:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:04.093 23:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.093 23:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.093 23:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.093 23:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.093 23:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:04.093 23:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:04.350 23:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:04.350 23:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.350 23:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:04.350 23:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:04.350 23:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:04.350 23:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.350 23:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.350 23:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.350 23:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.350 23:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.350 23:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.350 23:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.608 00:18:04.608 23:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.608 23:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.608 23:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.866 23:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.866 23:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.866 23:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.866 23:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.866 23:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.866 23:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.866 { 00:18:04.866 "cntlid": 5, 00:18:04.866 "qid": 0, 00:18:04.866 "state": "enabled", 00:18:04.866 "thread": "nvmf_tgt_poll_group_000", 00:18:04.866 "listen_address": { 00:18:04.866 "trtype": "TCP", 00:18:04.866 "adrfam": "IPv4", 00:18:04.866 "traddr": "10.0.0.2", 00:18:04.866 "trsvcid": "4420" 00:18:04.866 }, 00:18:04.866 "peer_address": { 00:18:04.866 "trtype": "TCP", 00:18:04.866 "adrfam": "IPv4", 00:18:04.866 "traddr": "10.0.0.1", 00:18:04.866 "trsvcid": "38254" 00:18:04.866 }, 00:18:04.866 "auth": { 00:18:04.866 "state": "completed", 00:18:04.866 "digest": "sha256", 00:18:04.866 "dhgroup": "null" 00:18:04.866 } 00:18:04.866 } 00:18:04.866 ]' 00:18:04.866 23:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.866 23:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.866 23:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.866 23:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:04.866 23:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.866 23:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.866 23:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.866 23:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.182 23:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MjVkZjZhMTc0NDY2ZjAxMTczYzM5ZGYzMGQ1ZmZhYThlY2VmOTFmZjc5NTI5ZjNlZdE4Ag==: --dhchap-ctrl-secret DHHC-1:01:M2MwNjMzMGIzYzdlZTQ0ZDgzZDE1M2JiNTZkZTA2ZTT5I6h5: 00:18:06.137 23:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.137 23:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:06.137 23:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:06.137 23:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.137 23:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.137 23:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.137 23:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:06.137 23:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:06.395 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:06.395 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.395 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:06.395 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:06.395 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:06.395 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.395 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:06.395 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.395 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.395 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.395 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.395 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.964 00:18:06.964 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.964 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.964 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.964 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.964 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.964 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.964 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.964 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.964 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.964 { 00:18:06.964 "cntlid": 7, 00:18:06.964 "qid": 0, 00:18:06.964 "state": "enabled", 00:18:06.964 "thread": "nvmf_tgt_poll_group_000", 00:18:06.964 "listen_address": { 00:18:06.964 "trtype": "TCP", 00:18:06.964 "adrfam": "IPv4", 00:18:06.964 "traddr": "10.0.0.2", 00:18:06.964 "trsvcid": "4420" 00:18:06.964 }, 00:18:06.964 "peer_address": { 00:18:06.964 "trtype": "TCP", 00:18:06.964 "adrfam": "IPv4", 00:18:06.964 "traddr": "10.0.0.1", 00:18:06.964 "trsvcid": "38290" 00:18:06.964 }, 00:18:06.964 "auth": { 00:18:06.964 "state": "completed", 00:18:06.964 "digest": "sha256", 00:18:06.964 "dhgroup": "null" 00:18:06.964 } 00:18:06.964 } 00:18:06.964 ]' 00:18:06.964 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.221 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:07.221 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.221 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:07.221 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.221 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.221 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.222 23:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.479 23:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzJlMDZkNjg2ZGI4NzE0MGRmZjhmMjliZGVkODM4ZjAyOTExMzk3MTZiOTk4MDQ3YzhkNWU4NTE4YWVjMmNmNsBHm6I=: 00:18:08.414 23:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.414 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:08.414 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.414 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.414 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.414 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:08.414 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.414 23:24:06 
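
The block above is the assertion half of each cycle: after the host controller attaches, the harness queries the target for the subsystem's qpairs and checks that DH-HMAC-CHAP finished with exactly the digest and DH group under test. A minimal sketch of that check, assuming the target answers on rpc.py's default socket and keeping the jq filters from the trace (the verify_auth wrapper is hypothetical, not a name from auth.sh):

    #!/usr/bin/env bash
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    verify_auth() {  # verify_auth <digest> <dhgroup>
        local qpairs
        # Ask the target (not the host socket) for the subsystem's qpairs.
        qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
        # The handshake must have completed with the negotiated parameters.
        [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$1" ]] &&
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$2" ]] &&
        [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    }

    verify_auth sha256 null   # the combination exercised at this point in the log
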
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:08.414 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:08.673 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:08.673 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.673 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:08.673 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:08.673 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:08.673 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.673 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.673 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.673 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.673 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.673 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.673 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.931 00:18:08.931 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.931 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.931 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.189 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.189 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.189 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.189 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.189 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.189 23:24:06 
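
The backslash runs in comparisons such as [[ sha256 == \s\h\a\2\5\6 ]] and [[ nvme0 == \n\v\m\e\0 ]] throughout this trace are not corruption: the right-hand side of [[ == ]] is a glob pattern, so the script quotes it to force a literal match, and bash's xtrace renders that quoting as one escape per character. A two-line illustration:

    set -x
    digest=sha256
    [[ $digest == "sha256" ]]   # traced as: [[ sha256 == \s\h\a\2\5\6 ]]
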
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.189 { 00:18:09.189 "cntlid": 9, 00:18:09.189 "qid": 0, 00:18:09.189 "state": "enabled", 00:18:09.189 "thread": "nvmf_tgt_poll_group_000", 00:18:09.189 "listen_address": { 00:18:09.189 "trtype": "TCP", 00:18:09.189 "adrfam": "IPv4", 00:18:09.189 "traddr": "10.0.0.2", 00:18:09.189 "trsvcid": "4420" 00:18:09.189 }, 00:18:09.189 "peer_address": { 00:18:09.189 "trtype": "TCP", 00:18:09.189 "adrfam": "IPv4", 00:18:09.189 "traddr": "10.0.0.1", 00:18:09.189 "trsvcid": "38314" 00:18:09.189 }, 00:18:09.190 "auth": { 00:18:09.190 "state": "completed", 00:18:09.190 "digest": "sha256", 00:18:09.190 "dhgroup": "ffdhe2048" 00:18:09.190 } 00:18:09.190 } 00:18:09.190 ]' 00:18:09.190 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.447 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.447 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.447 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:09.447 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.447 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.447 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.447 23:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.706 23:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzBmMTk1MzcwNDJkNzQ3NmUzYmNiN2M1ZDYzMmEzYmVlZTFhNmRmMGJiY2U0ZmY5LIqEHw==: --dhchap-ctrl-secret DHHC-1:03:Yzk1Y2U4MGQxMzA0NDk3NjYwNDZkZjQ3ZDlmYTEwNTUwMzUzOGU2ZjQyZmUwZGEzMjZmNThhMzNhM2VhMjUwMvkt5lk=: 00:18:10.642 23:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.642 23:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:10.642 23:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.642 23:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.642 23:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.642 23:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.642 23:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:10.642 23:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:10.900 23:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:10.900 23:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.900 23:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:10.900 23:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:10.900 23:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:10.900 23:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.901 23:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.901 23:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.901 23:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.901 23:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.901 23:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.901 23:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.158 00:18:11.158 23:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.158 23:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.158 23:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.416 23:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.416 23:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.416 23:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.416 23:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.416 23:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.416 23:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.416 { 00:18:11.416 "cntlid": 11, 00:18:11.416 "qid": 0, 00:18:11.416 "state": "enabled", 00:18:11.416 "thread": "nvmf_tgt_poll_group_000", 00:18:11.416 "listen_address": { 
00:18:11.416 "trtype": "TCP", 00:18:11.416 "adrfam": "IPv4", 00:18:11.416 "traddr": "10.0.0.2", 00:18:11.416 "trsvcid": "4420" 00:18:11.416 }, 00:18:11.416 "peer_address": { 00:18:11.416 "trtype": "TCP", 00:18:11.416 "adrfam": "IPv4", 00:18:11.416 "traddr": "10.0.0.1", 00:18:11.416 "trsvcid": "38324" 00:18:11.416 }, 00:18:11.416 "auth": { 00:18:11.416 "state": "completed", 00:18:11.416 "digest": "sha256", 00:18:11.416 "dhgroup": "ffdhe2048" 00:18:11.416 } 00:18:11.416 } 00:18:11.416 ]' 00:18:11.416 23:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.674 23:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:11.674 23:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.674 23:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:11.674 23:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.674 23:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.674 23:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.674 23:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.931 23:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmM0YzQ1OTVjZmQyNGIxZjczNzQxMDQwMDU3ZDBmYTDiTEuE: --dhchap-ctrl-secret DHHC-1:02:YTBjNDM0YTUyOGRkMTdmNWE0N2VkNmJlNzcyZGNiNTg1ZTIxOGQ2MTZlNTg1MTAzl9e2zQ==: 00:18:12.866 23:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.866 23:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:12.866 23:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.866 23:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.866 23:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.866 23:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.866 23:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:12.866 23:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:13.124 23:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:13.124 23:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.124 23:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:13.124 23:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:13.124 23:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:13.124 23:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.124 23:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.124 23:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.124 23:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.124 23:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.124 23:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.124 23:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.381 00:18:13.381 23:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.381 23:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.381 23:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.639 23:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.639 23:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.639 23:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.639 23:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.639 23:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.639 23:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.639 { 00:18:13.639 "cntlid": 13, 00:18:13.639 "qid": 0, 00:18:13.639 "state": "enabled", 00:18:13.639 "thread": "nvmf_tgt_poll_group_000", 00:18:13.639 "listen_address": { 00:18:13.639 "trtype": "TCP", 00:18:13.639 "adrfam": "IPv4", 00:18:13.639 "traddr": "10.0.0.2", 00:18:13.639 "trsvcid": "4420" 00:18:13.639 }, 00:18:13.639 "peer_address": { 00:18:13.639 "trtype": "TCP", 00:18:13.639 "adrfam": "IPv4", 00:18:13.639 "traddr": "10.0.0.1", 00:18:13.639 "trsvcid": "56722" 00:18:13.639 }, 00:18:13.639 "auth": { 00:18:13.639 
"state": "completed", 00:18:13.639 "digest": "sha256", 00:18:13.639 "dhgroup": "ffdhe2048" 00:18:13.639 } 00:18:13.639 } 00:18:13.639 ]' 00:18:13.639 23:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.639 23:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:13.639 23:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.898 23:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:13.898 23:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.898 23:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.898 23:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.898 23:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.156 23:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MjVkZjZhMTc0NDY2ZjAxMTczYzM5ZGYzMGQ1ZmZhYThlY2VmOTFmZjc5NTI5ZjNlZdE4Ag==: --dhchap-ctrl-secret DHHC-1:01:M2MwNjMzMGIzYzdlZTQ0ZDgzZDE1M2JiNTZkZTA2ZTT5I6h5: 00:18:15.091 23:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.091 23:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:15.091 23:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.091 23:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.091 23:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.091 23:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.091 23:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:15.091 23:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:15.349 23:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:15.349 23:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.349 23:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:15.349 23:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:15.349 23:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:18:15.349 23:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.349 23:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:15.349 23:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.349 23:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.349 23:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.349 23:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:15.349 23:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:15.607 00:18:15.607 23:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.607 23:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.607 23:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.864 23:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.864 23:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.864 23:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.864 23:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.864 23:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.864 23:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.864 { 00:18:15.864 "cntlid": 15, 00:18:15.864 "qid": 0, 00:18:15.864 "state": "enabled", 00:18:15.864 "thread": "nvmf_tgt_poll_group_000", 00:18:15.864 "listen_address": { 00:18:15.864 "trtype": "TCP", 00:18:15.864 "adrfam": "IPv4", 00:18:15.864 "traddr": "10.0.0.2", 00:18:15.864 "trsvcid": "4420" 00:18:15.864 }, 00:18:15.864 "peer_address": { 00:18:15.864 "trtype": "TCP", 00:18:15.864 "adrfam": "IPv4", 00:18:15.864 "traddr": "10.0.0.1", 00:18:15.864 "trsvcid": "56752" 00:18:15.864 }, 00:18:15.864 "auth": { 00:18:15.864 "state": "completed", 00:18:15.864 "digest": "sha256", 00:18:15.864 "dhgroup": "ffdhe2048" 00:18:15.864 } 00:18:15.864 } 00:18:15.864 ]' 00:18:15.864 23:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.864 23:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:15.864 23:24:13 
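
Note that this pass registers key3 without a --dhchap-ctrlr-key: the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion seen in the trace splices the flag pair in only when a controller key exists for that index, so the key3 iterations exercise unidirectional authentication. Roughly, with the array names from auth.sh and a runnable stand-in for the missing entry:

    keyid=3
    ckeys=()   # hypothetical: no ckey3 was generated for this index
    # :+ expands to the flag pair only when ckeys[keyid] is set and non-empty;
    # here it yields an empty array, so the pair vanishes from the command line.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "flags: --dhchap-key key$keyid ${ckey[*]}"
    # prints: flags: --dhchap-key key3      (controller-key flag dropped)
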
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.864 23:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:15.864 23:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.864 23:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.864 23:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.864 23:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.121 23:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzJlMDZkNjg2ZGI4NzE0MGRmZjhmMjliZGVkODM4ZjAyOTExMzk3MTZiOTk4MDQ3YzhkNWU4NTE4YWVjMmNmNsBHm6I=: 00:18:17.493 23:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.493 23:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:17.493 23:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.493 23:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.493 23:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.493 23:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:17.493 23:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.493 23:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:17.493 23:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:17.493 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:17.494 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.494 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:17.494 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:17.494 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:17.494 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.494 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.494 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.494 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.494 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.494 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.494 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.751 00:18:17.751 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.751 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.751 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.008 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.008 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.008 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.008 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.008 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.008 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.009 { 00:18:18.009 "cntlid": 17, 00:18:18.009 "qid": 0, 00:18:18.009 "state": "enabled", 00:18:18.009 "thread": "nvmf_tgt_poll_group_000", 00:18:18.009 "listen_address": { 00:18:18.009 "trtype": "TCP", 00:18:18.009 "adrfam": "IPv4", 00:18:18.009 "traddr": "10.0.0.2", 00:18:18.009 "trsvcid": "4420" 00:18:18.009 }, 00:18:18.009 "peer_address": { 00:18:18.009 "trtype": "TCP", 00:18:18.009 "adrfam": "IPv4", 00:18:18.009 "traddr": "10.0.0.1", 00:18:18.009 "trsvcid": "56780" 00:18:18.009 }, 00:18:18.009 "auth": { 00:18:18.009 "state": "completed", 00:18:18.009 "digest": "sha256", 00:18:18.009 "dhgroup": "ffdhe3072" 00:18:18.009 } 00:18:18.009 } 00:18:18.009 ]' 00:18:18.009 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.009 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:18.009 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.266 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:18.266 23:24:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.266 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.266 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.266 23:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.523 23:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzBmMTk1MzcwNDJkNzQ3NmUzYmNiN2M1ZDYzMmEzYmVlZTFhNmRmMGJiY2U0ZmY5LIqEHw==: --dhchap-ctrl-secret DHHC-1:03:Yzk1Y2U4MGQxMzA0NDk3NjYwNDZkZjQ3ZDlmYTEwNTUwMzUzOGU2ZjQyZmUwZGEzMjZmNThhMzNhM2VhMjUwMvkt5lk=: 00:18:19.459 23:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.459 23:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:19.459 23:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.459 23:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.459 23:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.459 23:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.459 23:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:19.459 23:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:19.717 23:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:19.717 23:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.717 23:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:19.717 23:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:19.717 23:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:19.717 23:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.717 23:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.717 23:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.718 23:24:17 
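
Each cycle ends with a second handshake through the kernel initiator: nvme-cli is handed the same key material in DHHC-1 wire format via --dhchap-secret and --dhchap-ctrl-secret, so the target's host entry is validated against a non-SPDK host as well. The leg just traced, with the secret blobs elided (flags exactly as in the trace; the <...> values are placeholders for the DHHC-1 strings shown above):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret 'DHHC-1:00:<host key>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<controller key>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # "disconnected 1 controller(s)"
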
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.718 23:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.718 23:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.718 23:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.976 00:18:19.976 23:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.976 23:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.976 23:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.234 23:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.234 23:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.234 23:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.234 23:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.234 23:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.234 23:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.234 { 00:18:20.234 "cntlid": 19, 00:18:20.234 "qid": 0, 00:18:20.234 "state": "enabled", 00:18:20.234 "thread": "nvmf_tgt_poll_group_000", 00:18:20.234 "listen_address": { 00:18:20.234 "trtype": "TCP", 00:18:20.234 "adrfam": "IPv4", 00:18:20.234 "traddr": "10.0.0.2", 00:18:20.234 "trsvcid": "4420" 00:18:20.234 }, 00:18:20.234 "peer_address": { 00:18:20.234 "trtype": "TCP", 00:18:20.234 "adrfam": "IPv4", 00:18:20.234 "traddr": "10.0.0.1", 00:18:20.234 "trsvcid": "56798" 00:18:20.234 }, 00:18:20.234 "auth": { 00:18:20.234 "state": "completed", 00:18:20.234 "digest": "sha256", 00:18:20.234 "dhgroup": "ffdhe3072" 00:18:20.234 } 00:18:20.234 } 00:18:20.234 ]' 00:18:20.234 23:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.234 23:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:20.234 23:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.492 23:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:20.492 23:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.492 23:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.492 23:24:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.492 23:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.751 23:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmM0YzQ1OTVjZmQyNGIxZjczNzQxMDQwMDU3ZDBmYTDiTEuE: --dhchap-ctrl-secret DHHC-1:02:YTBjNDM0YTUyOGRkMTdmNWE0N2VkNmJlNzcyZGNiNTg1ZTIxOGQ2MTZlNTg1MTAzl9e2zQ==: 00:18:21.687 23:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.687 23:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:21.687 23:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.687 23:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.687 23:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.687 23:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.687 23:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:21.687 23:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:21.945 23:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:21.945 23:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.945 23:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:21.945 23:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:21.945 23:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:21.945 23:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.945 23:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.945 23:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.945 23:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.945 23:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.945 23:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.945 23:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.203 00:18:22.203 23:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.203 23:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.203 23:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.461 23:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.461 23:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.461 23:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.462 23:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.462 23:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.462 23:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.462 { 00:18:22.462 "cntlid": 21, 00:18:22.462 "qid": 0, 00:18:22.462 "state": "enabled", 00:18:22.462 "thread": "nvmf_tgt_poll_group_000", 00:18:22.462 "listen_address": { 00:18:22.462 "trtype": "TCP", 00:18:22.462 "adrfam": "IPv4", 00:18:22.462 "traddr": "10.0.0.2", 00:18:22.462 "trsvcid": "4420" 00:18:22.462 }, 00:18:22.462 "peer_address": { 00:18:22.462 "trtype": "TCP", 00:18:22.462 "adrfam": "IPv4", 00:18:22.462 "traddr": "10.0.0.1", 00:18:22.462 "trsvcid": "52728" 00:18:22.462 }, 00:18:22.462 "auth": { 00:18:22.462 "state": "completed", 00:18:22.462 "digest": "sha256", 00:18:22.462 "dhgroup": "ffdhe3072" 00:18:22.462 } 00:18:22.462 } 00:18:22.462 ]' 00:18:22.462 23:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.462 23:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:22.462 23:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.728 23:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:22.728 23:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.728 23:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.728 23:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.728 23:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.018 
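
Each SPDK-host attach/detach above is then repeated from the kernel initiator with nvme-cli, using literal DHHC-1 secrets rather than named key objects. Per NVMe in-band authentication (TP 8006), a DHHC-1 string has the form "DHHC-1:<id>:<base64 payload>:", where the two-digit id selects the transformation hash (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512). The full secrets appear verbatim in the surrounding records and are elided in this sketch:

    # Kernel-initiator side of the same test (addresses and NQNs taken
    # from this log; secrets truncated, see the adjacent records).
    nvme connect -t tcp -a 10.0.0.2 -i 1 \
        -n nqn.2024-03.io.spdk:cnode0 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret 'DHHC-1:01:...' \
        --dhchap-ctrl-secret 'DHHC-1:02:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

After the disconnect the host entry is removed from the subsystem (nvmf_subsystem_remove_host), so every iteration provisions its keys from a clean slate.
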
23:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MjVkZjZhMTc0NDY2ZjAxMTczYzM5ZGYzMGQ1ZmZhYThlY2VmOTFmZjc5NTI5ZjNlZdE4Ag==: --dhchap-ctrl-secret DHHC-1:01:M2MwNjMzMGIzYzdlZTQ0ZDgzZDE1M2JiNTZkZTA2ZTT5I6h5: 00:18:23.962 23:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.962 23:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:23.962 23:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.962 23:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.962 23:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.962 23:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.962 23:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:23.962 23:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:24.220 23:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:24.220 23:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.220 23:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:24.220 23:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:24.220 23:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:24.220 23:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.220 23:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:24.220 23:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.220 23:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.220 23:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.220 23:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:24.220 23:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:24.478 00:18:24.478 23:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.478 23:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.478 23:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.736 23:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.736 23:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.736 23:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.736 23:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.736 23:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.736 23:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.736 { 00:18:24.736 "cntlid": 23, 00:18:24.736 "qid": 0, 00:18:24.736 "state": "enabled", 00:18:24.736 "thread": "nvmf_tgt_poll_group_000", 00:18:24.736 "listen_address": { 00:18:24.736 "trtype": "TCP", 00:18:24.736 "adrfam": "IPv4", 00:18:24.736 "traddr": "10.0.0.2", 00:18:24.736 "trsvcid": "4420" 00:18:24.736 }, 00:18:24.736 "peer_address": { 00:18:24.736 "trtype": "TCP", 00:18:24.736 "adrfam": "IPv4", 00:18:24.736 "traddr": "10.0.0.1", 00:18:24.736 "trsvcid": "52752" 00:18:24.736 }, 00:18:24.736 "auth": { 00:18:24.736 "state": "completed", 00:18:24.736 "digest": "sha256", 00:18:24.736 "dhgroup": "ffdhe3072" 00:18:24.736 } 00:18:24.736 } 00:18:24.736 ]' 00:18:24.736 23:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.736 23:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:24.736 23:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.736 23:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:24.736 23:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.994 23:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.994 23:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.994 23:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.251 23:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzJlMDZkNjg2ZGI4NzE0MGRmZjhmMjliZGVkODM4ZjAyOTExMzk3MTZiOTk4MDQ3YzhkNWU4NTE4YWVjMmNmNsBHm6I=: 00:18:26.187 23:24:23 
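
Note that the key3 runs carry no --dhchap-ctrlr-key / --dhchap-ctrl-secret: the ckey=(${ckeys[$3]:+...}) expansion traced at target/auth.sh@37 produces an empty array when no controller secret is configured for that key index, so the same code path exercises both bidirectional and host-only (unidirectional) authentication. A standalone illustration of that expansion, with placeholder values:

    # ${arr[i]:+words} expands to nothing when arr[i] is unset, so the
    # optional flag pair simply disappears for key indexes that have no
    # controller secret (the values below are placeholders).
    ckeys=([0]=ck0 [1]=ck1 [2]=ck2)   # no entry for index 3
    keyid=3
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "extra args: ${ckey[@]:-<none>}"   # -> extra args: <none>
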
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.187 23:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:26.187 23:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.187 23:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.187 23:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.187 23:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:26.187 23:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.187 23:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:26.187 23:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:26.444 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:18:26.444 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.445 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:26.445 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:26.445 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:26.445 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.445 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.445 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.445 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.445 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.445 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.445 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.011 00:18:27.011 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.011 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.011 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.269 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.269 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.269 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.269 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.269 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.269 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.269 { 00:18:27.269 "cntlid": 25, 00:18:27.269 "qid": 0, 00:18:27.269 "state": "enabled", 00:18:27.269 "thread": "nvmf_tgt_poll_group_000", 00:18:27.269 "listen_address": { 00:18:27.269 "trtype": "TCP", 00:18:27.269 "adrfam": "IPv4", 00:18:27.269 "traddr": "10.0.0.2", 00:18:27.269 "trsvcid": "4420" 00:18:27.269 }, 00:18:27.269 "peer_address": { 00:18:27.269 "trtype": "TCP", 00:18:27.269 "adrfam": "IPv4", 00:18:27.269 "traddr": "10.0.0.1", 00:18:27.269 "trsvcid": "52782" 00:18:27.269 }, 00:18:27.269 "auth": { 00:18:27.269 "state": "completed", 00:18:27.269 "digest": "sha256", 00:18:27.269 "dhgroup": "ffdhe4096" 00:18:27.269 } 00:18:27.269 } 00:18:27.269 ]' 00:18:27.269 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.269 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.269 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.269 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:27.269 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.269 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.269 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.269 23:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.529 23:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzBmMTk1MzcwNDJkNzQ3NmUzYmNiN2M1ZDYzMmEzYmVlZTFhNmRmMGJiY2U0ZmY5LIqEHw==: --dhchap-ctrl-secret DHHC-1:03:Yzk1Y2U4MGQxMzA0NDk3NjYwNDZkZjQ3ZDlmYTEwNTUwMzUzOGU2ZjQyZmUwZGEzMjZmNThhMzNhM2VhMjUwMvkt5lk=: 00:18:28.906 23:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
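
Every block in this section is one pass through the same routine, and its shape can be read directly off the traced line numbers (target/auth.sh@34-@40 provision and attach, @44-@49 verify and detach, @52-@56 repeat via nvme-cli and tear down). A condensed reconstruction follows; $subnqn and $hostnqn are illustrative names, not identifiers from the script:

    # Condensed sketch of the per-key routine; the verification and
    # nvme-cli passes are summarized in the trailing comment.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
            --dhchap-key "key$keyid" "${ckey[@]}"
        hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
            --dhchap-key "key$keyid" "${ckey[@]}"
        # ...assert qpair digest/dhgroup/state against $digest/$dhgroup,
        # detach, rerun the handshake with nvme connect and the matching
        # DHHC-1 secrets, then remove the host from the subsystem.
    }
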
00:18:28.906 23:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:28.906 23:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.906 23:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.906 23:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.906 23:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.906 23:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:28.906 23:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:28.906 23:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:28.906 23:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.906 23:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:28.906 23:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:28.906 23:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:28.906 23:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.906 23:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.906 23:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.906 23:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.906 23:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.906 23:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.906 23:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.476 00:18:29.476 23:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.476 23:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.476 23:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.476 23:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.476 23:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.476 23:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.476 23:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.734 23:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.734 23:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.734 { 00:18:29.734 "cntlid": 27, 00:18:29.734 "qid": 0, 00:18:29.734 "state": "enabled", 00:18:29.735 "thread": "nvmf_tgt_poll_group_000", 00:18:29.735 "listen_address": { 00:18:29.735 "trtype": "TCP", 00:18:29.735 "adrfam": "IPv4", 00:18:29.735 "traddr": "10.0.0.2", 00:18:29.735 "trsvcid": "4420" 00:18:29.735 }, 00:18:29.735 "peer_address": { 00:18:29.735 "trtype": "TCP", 00:18:29.735 "adrfam": "IPv4", 00:18:29.735 "traddr": "10.0.0.1", 00:18:29.735 "trsvcid": "52806" 00:18:29.735 }, 00:18:29.735 "auth": { 00:18:29.735 "state": "completed", 00:18:29.735 "digest": "sha256", 00:18:29.735 "dhgroup": "ffdhe4096" 00:18:29.735 } 00:18:29.735 } 00:18:29.735 ]' 00:18:29.735 23:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.735 23:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:29.735 23:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.735 23:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:29.735 23:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.735 23:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.735 23:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.735 23:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.993 23:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmM0YzQ1OTVjZmQyNGIxZjczNzQxMDQwMDU3ZDBmYTDiTEuE: --dhchap-ctrl-secret DHHC-1:02:YTBjNDM0YTUyOGRkMTdmNWE0N2VkNmJlNzcyZGNiNTg1ZTIxOGQ2MTZlNTg1MTAzl9e2zQ==: 00:18:30.930 23:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.930 23:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:30.930 23:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.930 23:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.930 23:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.930 23:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.930 23:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:30.930 23:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:31.188 23:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:31.188 23:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.188 23:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:31.188 23:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:31.188 23:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:31.188 23:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.188 23:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.188 23:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.188 23:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.188 23:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.188 23:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.188 23:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.756 00:18:31.756 23:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.756 23:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.757 23:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.014 23:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.014 23:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.014 23:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.014 23:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.014 23:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.014 23:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.014 { 00:18:32.014 "cntlid": 29, 00:18:32.015 "qid": 0, 00:18:32.015 "state": "enabled", 00:18:32.015 "thread": "nvmf_tgt_poll_group_000", 00:18:32.015 "listen_address": { 00:18:32.015 "trtype": "TCP", 00:18:32.015 "adrfam": "IPv4", 00:18:32.015 "traddr": "10.0.0.2", 00:18:32.015 "trsvcid": "4420" 00:18:32.015 }, 00:18:32.015 "peer_address": { 00:18:32.015 "trtype": "TCP", 00:18:32.015 "adrfam": "IPv4", 00:18:32.015 "traddr": "10.0.0.1", 00:18:32.015 "trsvcid": "52852" 00:18:32.015 }, 00:18:32.015 "auth": { 00:18:32.015 "state": "completed", 00:18:32.015 "digest": "sha256", 00:18:32.015 "dhgroup": "ffdhe4096" 00:18:32.015 } 00:18:32.015 } 00:18:32.015 ]' 00:18:32.015 23:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.015 23:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:32.015 23:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.015 23:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:32.015 23:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.015 23:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.015 23:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.015 23:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.274 23:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MjVkZjZhMTc0NDY2ZjAxMTczYzM5ZGYzMGQ1ZmZhYThlY2VmOTFmZjc5NTI5ZjNlZdE4Ag==: --dhchap-ctrl-secret DHHC-1:01:M2MwNjMzMGIzYzdlZTQ0ZDgzZDE1M2JiNTZkZTA2ZTT5I6h5: 00:18:33.211 23:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.211 23:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:33.211 23:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.211 23:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.470 23:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.470 23:24:30 
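
Throughout the log every "hostrpc" trace is expanded at target/auth.sh@31 into rpc.py aimed at /var/tmp/host.sock: the test runs a second SPDK application in the NVMe-oF host role, driven over its own RPC socket, while plain rpc_cmd talks to the target. The wrapper presumably looks something like the sketch below; the real definition lives in the test sources, not this log, and $rootdir is an assumed variable name:

    # Route an RPC to the host-side SPDK app instead of the target.
    hostrpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }
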
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.470 23:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:33.470 23:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:33.728 23:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:33.728 23:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.728 23:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:33.728 23:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:33.728 23:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:33.728 23:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.728 23:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:33.728 23:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.728 23:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.728 23:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.728 23:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.728 23:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.987 00:18:33.987 23:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.987 23:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.987 23:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.245 23:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.245 23:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.245 23:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.245 23:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.245 23:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:18:34.245 23:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.245 { 00:18:34.245 "cntlid": 31, 00:18:34.245 "qid": 0, 00:18:34.245 "state": "enabled", 00:18:34.245 "thread": "nvmf_tgt_poll_group_000", 00:18:34.245 "listen_address": { 00:18:34.245 "trtype": "TCP", 00:18:34.245 "adrfam": "IPv4", 00:18:34.245 "traddr": "10.0.0.2", 00:18:34.245 "trsvcid": "4420" 00:18:34.245 }, 00:18:34.245 "peer_address": { 00:18:34.245 "trtype": "TCP", 00:18:34.245 "adrfam": "IPv4", 00:18:34.245 "traddr": "10.0.0.1", 00:18:34.245 "trsvcid": "49048" 00:18:34.245 }, 00:18:34.245 "auth": { 00:18:34.245 "state": "completed", 00:18:34.245 "digest": "sha256", 00:18:34.245 "dhgroup": "ffdhe4096" 00:18:34.245 } 00:18:34.245 } 00:18:34.245 ]' 00:18:34.245 23:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.245 23:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:34.245 23:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.245 23:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:34.245 23:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.504 23:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.504 23:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.504 23:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.765 23:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzJlMDZkNjg2ZGI4NzE0MGRmZjhmMjliZGVkODM4ZjAyOTExMzk3MTZiOTk4MDQ3YzhkNWU4NTE4YWVjMmNmNsBHm6I=: 00:18:35.704 23:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.704 23:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:35.704 23:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.704 23:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.704 23:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.704 23:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:35.704 23:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.704 23:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:35.704 23:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:35.961 23:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:35.961 23:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.961 23:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:35.961 23:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:35.961 23:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:35.961 23:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.961 23:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.961 23:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.961 23:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.961 23:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.961 23:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.961 23:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.528 00:18:36.528 23:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.528 23:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.528 23:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.786 23:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.786 23:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.786 23:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.786 23:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.786 23:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.786 23:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.786 { 00:18:36.786 "cntlid": 33, 00:18:36.786 "qid": 0, 00:18:36.786 "state": "enabled", 00:18:36.786 "thread": "nvmf_tgt_poll_group_000", 00:18:36.786 "listen_address": { 
00:18:36.786 "trtype": "TCP", 00:18:36.786 "adrfam": "IPv4", 00:18:36.786 "traddr": "10.0.0.2", 00:18:36.786 "trsvcid": "4420" 00:18:36.786 }, 00:18:36.786 "peer_address": { 00:18:36.786 "trtype": "TCP", 00:18:36.786 "adrfam": "IPv4", 00:18:36.786 "traddr": "10.0.0.1", 00:18:36.786 "trsvcid": "49072" 00:18:36.786 }, 00:18:36.786 "auth": { 00:18:36.786 "state": "completed", 00:18:36.786 "digest": "sha256", 00:18:36.786 "dhgroup": "ffdhe6144" 00:18:36.786 } 00:18:36.786 } 00:18:36.786 ]' 00:18:36.786 23:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.786 23:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:36.786 23:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.786 23:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:36.786 23:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.786 23:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.786 23:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.786 23:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.043 23:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzBmMTk1MzcwNDJkNzQ3NmUzYmNiN2M1ZDYzMmEzYmVlZTFhNmRmMGJiY2U0ZmY5LIqEHw==: --dhchap-ctrl-secret DHHC-1:03:Yzk1Y2U4MGQxMzA0NDk3NjYwNDZkZjQ3ZDlmYTEwNTUwMzUzOGU2ZjQyZmUwZGEzMjZmNThhMzNhM2VhMjUwMvkt5lk=: 00:18:37.980 23:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.980 23:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:37.980 23:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.980 23:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.980 23:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.980 23:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.980 23:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:37.980 23:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:38.237 23:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:38.237 23:24:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.237 23:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:38.237 23:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:38.237 23:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:38.237 23:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.237 23:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.237 23:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.237 23:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.237 23:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.237 23:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.237 23:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.804 00:18:38.804 23:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.804 23:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.804 23:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.092 23:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.093 23:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.093 23:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.093 23:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.093 23:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.093 23:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.093 { 00:18:39.093 "cntlid": 35, 00:18:39.093 "qid": 0, 00:18:39.093 "state": "enabled", 00:18:39.093 "thread": "nvmf_tgt_poll_group_000", 00:18:39.093 "listen_address": { 00:18:39.093 "trtype": "TCP", 00:18:39.093 "adrfam": "IPv4", 00:18:39.093 "traddr": "10.0.0.2", 00:18:39.093 "trsvcid": "4420" 00:18:39.093 }, 00:18:39.093 "peer_address": { 00:18:39.093 "trtype": "TCP", 00:18:39.093 "adrfam": "IPv4", 00:18:39.093 "traddr": "10.0.0.1", 00:18:39.093 "trsvcid": "49092" 00:18:39.093 
}, 00:18:39.093 "auth": { 00:18:39.093 "state": "completed", 00:18:39.093 "digest": "sha256", 00:18:39.093 "dhgroup": "ffdhe6144" 00:18:39.093 } 00:18:39.093 } 00:18:39.093 ]' 00:18:39.093 23:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.093 23:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:39.093 23:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.093 23:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:39.093 23:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.093 23:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.093 23:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.093 23:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.352 23:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmM0YzQ1OTVjZmQyNGIxZjczNzQxMDQwMDU3ZDBmYTDiTEuE: --dhchap-ctrl-secret DHHC-1:02:YTBjNDM0YTUyOGRkMTdmNWE0N2VkNmJlNzcyZGNiNTg1ZTIxOGQ2MTZlNTg1MTAzl9e2zQ==: 00:18:40.728 23:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.728 23:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:40.728 23:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.728 23:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.728 23:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.728 23:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.728 23:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:40.728 23:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:40.728 23:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:40.728 23:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.728 23:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:40.728 23:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:40.728 23:24:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:40.728 23:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.728 23:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.728 23:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.728 23:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.728 23:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.728 23:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.728 23:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.320 00:18:41.320 23:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.320 23:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.320 23:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.578 23:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.578 23:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.578 23:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.578 23:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.578 23:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.578 23:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.578 { 00:18:41.578 "cntlid": 37, 00:18:41.578 "qid": 0, 00:18:41.578 "state": "enabled", 00:18:41.578 "thread": "nvmf_tgt_poll_group_000", 00:18:41.578 "listen_address": { 00:18:41.578 "trtype": "TCP", 00:18:41.578 "adrfam": "IPv4", 00:18:41.578 "traddr": "10.0.0.2", 00:18:41.578 "trsvcid": "4420" 00:18:41.578 }, 00:18:41.578 "peer_address": { 00:18:41.578 "trtype": "TCP", 00:18:41.578 "adrfam": "IPv4", 00:18:41.578 "traddr": "10.0.0.1", 00:18:41.578 "trsvcid": "49116" 00:18:41.578 }, 00:18:41.578 "auth": { 00:18:41.578 "state": "completed", 00:18:41.578 "digest": "sha256", 00:18:41.578 "dhgroup": "ffdhe6144" 00:18:41.578 } 00:18:41.578 } 00:18:41.578 ]' 00:18:41.578 23:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.578 23:24:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:41.578 23:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.578 23:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:41.578 23:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.578 23:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.578 23:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.578 23:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.835 23:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MjVkZjZhMTc0NDY2ZjAxMTczYzM5ZGYzMGQ1ZmZhYThlY2VmOTFmZjc5NTI5ZjNlZdE4Ag==: --dhchap-ctrl-secret DHHC-1:01:M2MwNjMzMGIzYzdlZTQ0ZDgzZDE1M2JiNTZkZTA2ZTT5I6h5: 00:18:42.769 23:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.769 23:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:42.769 23:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.769 23:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.769 23:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.769 23:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.769 23:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:42.769 23:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:43.026 23:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:18:43.026 23:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.026 23:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:43.026 23:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:43.026 23:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:43.026 23:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.026 23:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:43.026 23:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.026 23:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.026 23:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.026 23:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.026 23:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.592 00:18:43.592 23:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.592 23:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.592 23:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.849 23:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.849 23:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.849 23:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.849 23:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.849 23:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.849 23:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.849 { 00:18:43.849 "cntlid": 39, 00:18:43.849 "qid": 0, 00:18:43.849 "state": "enabled", 00:18:43.849 "thread": "nvmf_tgt_poll_group_000", 00:18:43.849 "listen_address": { 00:18:43.849 "trtype": "TCP", 00:18:43.849 "adrfam": "IPv4", 00:18:43.849 "traddr": "10.0.0.2", 00:18:43.849 "trsvcid": "4420" 00:18:43.849 }, 00:18:43.849 "peer_address": { 00:18:43.849 "trtype": "TCP", 00:18:43.849 "adrfam": "IPv4", 00:18:43.849 "traddr": "10.0.0.1", 00:18:43.849 "trsvcid": "45670" 00:18:43.849 }, 00:18:43.849 "auth": { 00:18:43.849 "state": "completed", 00:18:43.849 "digest": "sha256", 00:18:43.849 "dhgroup": "ffdhe6144" 00:18:43.849 } 00:18:43.849 } 00:18:43.849 ]' 00:18:43.849 23:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.849 23:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.849 23:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.107 23:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:44.107 23:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.107 23:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.107 23:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.107 23:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.363 23:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzJlMDZkNjg2ZGI4NzE0MGRmZjhmMjliZGVkODM4ZjAyOTExMzk3MTZiOTk4MDQ3YzhkNWU4NTE4YWVjMmNmNsBHm6I=: 00:18:45.296 23:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.296 23:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:45.296 23:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.296 23:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.296 23:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.296 23:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:45.296 23:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.296 23:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:45.296 23:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:45.552 23:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:45.552 23:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.552 23:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:45.553 23:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:45.553 23:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:45.553 23:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.553 23:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.553 23:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.553 23:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:45.553 23:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.553 23:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.553 23:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.485 00:18:46.485 23:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.485 23:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.485 23:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.741 23:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.741 23:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.741 23:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.741 23:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.741 23:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.741 23:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.741 { 00:18:46.741 "cntlid": 41, 00:18:46.741 "qid": 0, 00:18:46.741 "state": "enabled", 00:18:46.741 "thread": "nvmf_tgt_poll_group_000", 00:18:46.741 "listen_address": { 00:18:46.741 "trtype": "TCP", 00:18:46.741 "adrfam": "IPv4", 00:18:46.741 "traddr": "10.0.0.2", 00:18:46.741 "trsvcid": "4420" 00:18:46.741 }, 00:18:46.741 "peer_address": { 00:18:46.741 "trtype": "TCP", 00:18:46.741 "adrfam": "IPv4", 00:18:46.741 "traddr": "10.0.0.1", 00:18:46.741 "trsvcid": "45700" 00:18:46.741 }, 00:18:46.741 "auth": { 00:18:46.741 "state": "completed", 00:18:46.741 "digest": "sha256", 00:18:46.741 "dhgroup": "ffdhe8192" 00:18:46.741 } 00:18:46.741 } 00:18:46.741 ]' 00:18:46.741 23:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.742 23:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:46.742 23:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.742 23:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:46.742 23:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.742 23:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.742 23:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:18:46.742 23:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.999 23:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzBmMTk1MzcwNDJkNzQ3NmUzYmNiN2M1ZDYzMmEzYmVlZTFhNmRmMGJiY2U0ZmY5LIqEHw==: --dhchap-ctrl-secret DHHC-1:03:Yzk1Y2U4MGQxMzA0NDk3NjYwNDZkZjQ3ZDlmYTEwNTUwMzUzOGU2ZjQyZmUwZGEzMjZmNThhMzNhM2VhMjUwMvkt5lk=: 00:18:47.930 23:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.930 23:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:47.930 23:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.930 23:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.930 23:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.930 23:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.930 23:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:47.930 23:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:48.497 23:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:48.497 23:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.497 23:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:48.497 23:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:48.497 23:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:48.497 23:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.497 23:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.498 23:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.498 23:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.498 23:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.498 23:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.498 23:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.434 00:18:49.434 23:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.434 23:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.434 23:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.434 23:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.434 23:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.434 23:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.434 23:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.434 23:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.434 23:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.434 { 00:18:49.434 "cntlid": 43, 00:18:49.434 "qid": 0, 00:18:49.434 "state": "enabled", 00:18:49.434 "thread": "nvmf_tgt_poll_group_000", 00:18:49.434 "listen_address": { 00:18:49.434 "trtype": "TCP", 00:18:49.434 "adrfam": "IPv4", 00:18:49.434 "traddr": "10.0.0.2", 00:18:49.434 "trsvcid": "4420" 00:18:49.434 }, 00:18:49.434 "peer_address": { 00:18:49.434 "trtype": "TCP", 00:18:49.434 "adrfam": "IPv4", 00:18:49.434 "traddr": "10.0.0.1", 00:18:49.434 "trsvcid": "45718" 00:18:49.434 }, 00:18:49.434 "auth": { 00:18:49.434 "state": "completed", 00:18:49.434 "digest": "sha256", 00:18:49.434 "dhgroup": "ffdhe8192" 00:18:49.434 } 00:18:49.434 } 00:18:49.434 ]' 00:18:49.434 23:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.434 23:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:49.434 23:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.434 23:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:49.434 23:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.692 23:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.692 23:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.692 23:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.950 23:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmM0YzQ1OTVjZmQyNGIxZjczNzQxMDQwMDU3ZDBmYTDiTEuE: --dhchap-ctrl-secret DHHC-1:02:YTBjNDM0YTUyOGRkMTdmNWE0N2VkNmJlNzcyZGNiNTg1ZTIxOGQ2MTZlNTg1MTAzl9e2zQ==: 00:18:50.884 23:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.884 23:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:50.884 23:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.884 23:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.884 23:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.884 23:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.884 23:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:50.884 23:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:51.142 23:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:51.142 23:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.142 23:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:51.142 23:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:51.142 23:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:51.142 23:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.142 23:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.142 23:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.142 23:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.142 23:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.142 23:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.142 23:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.081 00:18:52.081 23:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.081 23:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.081 23:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.081 23:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.081 23:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.081 23:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.081 23:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.339 23:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.339 23:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.339 { 00:18:52.339 "cntlid": 45, 00:18:52.339 "qid": 0, 00:18:52.339 "state": "enabled", 00:18:52.339 "thread": "nvmf_tgt_poll_group_000", 00:18:52.339 "listen_address": { 00:18:52.339 "trtype": "TCP", 00:18:52.339 "adrfam": "IPv4", 00:18:52.339 "traddr": "10.0.0.2", 00:18:52.339 "trsvcid": "4420" 00:18:52.339 }, 00:18:52.339 "peer_address": { 00:18:52.339 "trtype": "TCP", 00:18:52.339 "adrfam": "IPv4", 00:18:52.339 "traddr": "10.0.0.1", 00:18:52.339 "trsvcid": "45726" 00:18:52.339 }, 00:18:52.339 "auth": { 00:18:52.339 "state": "completed", 00:18:52.339 "digest": "sha256", 00:18:52.339 "dhgroup": "ffdhe8192" 00:18:52.339 } 00:18:52.339 } 00:18:52.339 ]' 00:18:52.339 23:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.339 23:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.339 23:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.339 23:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:52.339 23:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.339 23:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.339 23:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.339 23:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.597 23:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MjVkZjZhMTc0NDY2ZjAxMTczYzM5ZGYzMGQ1ZmZhYThlY2VmOTFmZjc5NTI5ZjNlZdE4Ag==: --dhchap-ctrl-secret 
DHHC-1:01:M2MwNjMzMGIzYzdlZTQ0ZDgzZDE1M2JiNTZkZTA2ZTT5I6h5: 00:18:53.533 23:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.533 23:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:53.533 23:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.533 23:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.533 23:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.533 23:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.533 23:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:53.533 23:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:53.791 23:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:53.791 23:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.791 23:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:53.791 23:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:53.791 23:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:53.791 23:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.791 23:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:53.791 23:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.791 23:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.791 23:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.791 23:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.791 23:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.729 00:18:54.729 23:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.729 23:24:52 
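The verification step that recurs after each attach, condensed: confirm the controller is visible on the host RPC socket, then fetch the subsystem's qpairs from the target and assert the negotiated digest, DH group and auth state. The jq filters are the ones in the trace; the helper name is illustrative:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

verify_auth_sketch() {
    local digest=$1 dhgroup=$2
    # the attached controller must show up under the name given to -b above
    [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # the target reports the auth parameters negotiated for the new qpair
    local qpairs
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
}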
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.729 23:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.986 23:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.986 23:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.986 23:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.986 23:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.986 23:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.986 23:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.986 { 00:18:54.986 "cntlid": 47, 00:18:54.986 "qid": 0, 00:18:54.986 "state": "enabled", 00:18:54.986 "thread": "nvmf_tgt_poll_group_000", 00:18:54.986 "listen_address": { 00:18:54.986 "trtype": "TCP", 00:18:54.986 "adrfam": "IPv4", 00:18:54.986 "traddr": "10.0.0.2", 00:18:54.986 "trsvcid": "4420" 00:18:54.986 }, 00:18:54.986 "peer_address": { 00:18:54.986 "trtype": "TCP", 00:18:54.986 "adrfam": "IPv4", 00:18:54.986 "traddr": "10.0.0.1", 00:18:54.986 "trsvcid": "55518" 00:18:54.986 }, 00:18:54.986 "auth": { 00:18:54.986 "state": "completed", 00:18:54.986 "digest": "sha256", 00:18:54.986 "dhgroup": "ffdhe8192" 00:18:54.986 } 00:18:54.986 } 00:18:54.986 ]' 00:18:54.986 23:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.986 23:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.986 23:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.986 23:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:54.986 23:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.986 23:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.986 23:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.986 23:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.244 23:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzJlMDZkNjg2ZGI4NzE0MGRmZjhmMjliZGVkODM4ZjAyOTExMzk3MTZiOTk4MDQ3YzhkNWU4NTE4YWVjMmNmNsBHm6I=: 00:18:56.188 23:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.446 23:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:56.446 23:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.446 23:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.446 23:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.446 23:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:56.446 23:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.446 23:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.446 23:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:56.446 23:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:56.704 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:56.704 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.704 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:56.704 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:56.704 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:56.704 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.704 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.704 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.704 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.704 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.704 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.704 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.961 00:18:56.961 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.961 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:56.961 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.218 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.218 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.218 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.218 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.218 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.218 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.218 { 00:18:57.218 "cntlid": 49, 00:18:57.218 "qid": 0, 00:18:57.218 "state": "enabled", 00:18:57.218 "thread": "nvmf_tgt_poll_group_000", 00:18:57.218 "listen_address": { 00:18:57.218 "trtype": "TCP", 00:18:57.218 "adrfam": "IPv4", 00:18:57.218 "traddr": "10.0.0.2", 00:18:57.218 "trsvcid": "4420" 00:18:57.218 }, 00:18:57.218 "peer_address": { 00:18:57.218 "trtype": "TCP", 00:18:57.218 "adrfam": "IPv4", 00:18:57.218 "traddr": "10.0.0.1", 00:18:57.218 "trsvcid": "55546" 00:18:57.218 }, 00:18:57.218 "auth": { 00:18:57.218 "state": "completed", 00:18:57.218 "digest": "sha384", 00:18:57.218 "dhgroup": "null" 00:18:57.218 } 00:18:57.218 } 00:18:57.218 ]' 00:18:57.218 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.218 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:57.218 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.218 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:57.218 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.218 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.218 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.218 23:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.477 23:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzBmMTk1MzcwNDJkNzQ3NmUzYmNiN2M1ZDYzMmEzYmVlZTFhNmRmMGJiY2U0ZmY5LIqEHw==: --dhchap-ctrl-secret DHHC-1:03:Yzk1Y2U4MGQxMzA0NDk3NjYwNDZkZjQ3ZDlmYTEwNTUwMzUzOGU2ZjQyZmUwZGEzMjZmNThhMzNhM2VhMjUwMvkt5lk=: 00:18:58.413 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.413 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:58.413 23:24:56 
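The same authentication is then exercised in-band with nvme-cli, as in the connect/disconnect lines above. The DHHC-1:<nn>: prefix on each secret names the key transformation of the NVMe DH-HMAC-CHAP key format (00 = unhashed, 01/02/03 = HMAC-SHA-256/384/512); that mapping comes from the key-format specification rather than from this log. The secret values below are placeholders, not the keys from this run:

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret 'DHHC-1:00:<host-key-base64>:' \
    --dhchap-ctrl-secret 'DHHC-1:03:<controller-key-base64>:'
nvme disconnect -n "$subnqn"   # expect: "disconnected 1 controller(s)"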
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.413 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.413 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.413 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.413 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:58.413 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:58.672 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:58.672 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.672 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:58.672 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:58.672 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:58.672 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.672 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.672 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.672 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.931 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.931 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.931 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.222 00:18:59.223 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.223 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.223 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.480 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.480 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.480 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.480 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.480 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.480 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.480 { 00:18:59.480 "cntlid": 51, 00:18:59.480 "qid": 0, 00:18:59.480 "state": "enabled", 00:18:59.480 "thread": "nvmf_tgt_poll_group_000", 00:18:59.480 "listen_address": { 00:18:59.480 "trtype": "TCP", 00:18:59.480 "adrfam": "IPv4", 00:18:59.480 "traddr": "10.0.0.2", 00:18:59.480 "trsvcid": "4420" 00:18:59.480 }, 00:18:59.480 "peer_address": { 00:18:59.480 "trtype": "TCP", 00:18:59.480 "adrfam": "IPv4", 00:18:59.480 "traddr": "10.0.0.1", 00:18:59.480 "trsvcid": "55584" 00:18:59.480 }, 00:18:59.480 "auth": { 00:18:59.480 "state": "completed", 00:18:59.480 "digest": "sha384", 00:18:59.480 "dhgroup": "null" 00:18:59.480 } 00:18:59.480 } 00:18:59.480 ]' 00:18:59.480 23:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.480 23:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:59.480 23:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.480 23:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:59.480 23:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.480 23:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.480 23:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.480 23:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.738 23:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmM0YzQ1OTVjZmQyNGIxZjczNzQxMDQwMDU3ZDBmYTDiTEuE: --dhchap-ctrl-secret DHHC-1:02:YTBjNDM0YTUyOGRkMTdmNWE0N2VkNmJlNzcyZGNiNTg1ZTIxOGQ2MTZlNTg1MTAzl9e2zQ==: 00:19:00.674 23:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.674 23:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:00.674 23:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.674 23:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.674 23:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.674 23:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.674 23:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:00.674 23:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:00.932 23:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:00.932 23:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.932 23:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:00.932 23:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:00.932 23:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:00.932 23:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.932 23:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.932 23:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.932 23:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.932 23:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.932 23:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.932 23:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.190 00:19:01.190 23:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.190 23:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.190 23:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.449 23:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.449 23:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.449 23:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.449 23:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.449 23:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:19:01.449 23:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.449 { 00:19:01.449 "cntlid": 53, 00:19:01.449 "qid": 0, 00:19:01.449 "state": "enabled", 00:19:01.449 "thread": "nvmf_tgt_poll_group_000", 00:19:01.449 "listen_address": { 00:19:01.449 "trtype": "TCP", 00:19:01.449 "adrfam": "IPv4", 00:19:01.449 "traddr": "10.0.0.2", 00:19:01.449 "trsvcid": "4420" 00:19:01.449 }, 00:19:01.449 "peer_address": { 00:19:01.449 "trtype": "TCP", 00:19:01.449 "adrfam": "IPv4", 00:19:01.449 "traddr": "10.0.0.1", 00:19:01.449 "trsvcid": "55602" 00:19:01.449 }, 00:19:01.449 "auth": { 00:19:01.449 "state": "completed", 00:19:01.449 "digest": "sha384", 00:19:01.449 "dhgroup": "null" 00:19:01.449 } 00:19:01.449 } 00:19:01.449 ]' 00:19:01.449 23:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.707 23:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:01.707 23:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.707 23:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:01.707 23:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.707 23:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.707 23:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.707 23:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.965 23:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MjVkZjZhMTc0NDY2ZjAxMTczYzM5ZGYzMGQ1ZmZhYThlY2VmOTFmZjc5NTI5ZjNlZdE4Ag==: --dhchap-ctrl-secret DHHC-1:01:M2MwNjMzMGIzYzdlZTQ0ZDgzZDE1M2JiNTZkZTA2ZTT5I6h5: 00:19:02.905 23:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.905 23:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:02.905 23:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.905 23:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.905 23:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.905 23:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.905 23:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:02.905 23:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:03.164 23:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:03.164 23:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.164 23:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:03.164 23:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:03.164 23:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:03.164 23:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.164 23:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:03.164 23:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.164 23:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.164 23:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.164 23:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:03.164 23:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:03.423 00:19:03.423 23:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.423 23:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.423 23:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.682 23:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.682 23:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.682 23:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.682 23:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.682 23:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.682 23:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.682 { 00:19:03.682 "cntlid": 55, 00:19:03.682 "qid": 0, 00:19:03.682 "state": "enabled", 00:19:03.682 "thread": "nvmf_tgt_poll_group_000", 00:19:03.682 "listen_address": { 00:19:03.682 "trtype": "TCP", 00:19:03.682 "adrfam": "IPv4", 00:19:03.682 "traddr": "10.0.0.2", 00:19:03.682 "trsvcid": "4420" 00:19:03.682 }, 00:19:03.682 "peer_address": { 
00:19:03.682 "trtype": "TCP", 00:19:03.682 "adrfam": "IPv4", 00:19:03.682 "traddr": "10.0.0.1", 00:19:03.682 "trsvcid": "48538" 00:19:03.682 }, 00:19:03.682 "auth": { 00:19:03.682 "state": "completed", 00:19:03.682 "digest": "sha384", 00:19:03.682 "dhgroup": "null" 00:19:03.682 } 00:19:03.682 } 00:19:03.682 ]' 00:19:03.682 23:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.941 23:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:03.941 23:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.941 23:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:03.941 23:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.941 23:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.941 23:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.941 23:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.200 23:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzJlMDZkNjg2ZGI4NzE0MGRmZjhmMjliZGVkODM4ZjAyOTExMzk3MTZiOTk4MDQ3YzhkNWU4NTE4YWVjMmNmNsBHm6I=: 00:19:05.138 23:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.138 23:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:05.138 23:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.138 23:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.138 23:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.396 23:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:05.396 23:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.396 23:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:05.396 23:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:05.655 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:05.655 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.655 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:19:05.655 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:05.655 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:05.655 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.655 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.655 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.655 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.655 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.655 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.656 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.912 00:19:05.912 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.912 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.913 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.169 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.169 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.169 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.169 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.169 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.169 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.169 { 00:19:06.169 "cntlid": 57, 00:19:06.169 "qid": 0, 00:19:06.169 "state": "enabled", 00:19:06.169 "thread": "nvmf_tgt_poll_group_000", 00:19:06.169 "listen_address": { 00:19:06.169 "trtype": "TCP", 00:19:06.169 "adrfam": "IPv4", 00:19:06.169 "traddr": "10.0.0.2", 00:19:06.169 "trsvcid": "4420" 00:19:06.169 }, 00:19:06.169 "peer_address": { 00:19:06.169 "trtype": "TCP", 00:19:06.169 "adrfam": "IPv4", 00:19:06.169 "traddr": "10.0.0.1", 00:19:06.169 "trsvcid": "48562" 00:19:06.169 }, 00:19:06.169 "auth": { 00:19:06.169 "state": "completed", 00:19:06.169 "digest": "sha384", 00:19:06.169 "dhgroup": "ffdhe2048" 00:19:06.169 } 00:19:06.169 } 00:19:06.169 ]' 
00:19:06.169 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.169 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:06.169 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.169 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:06.169 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.427 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.427 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.427 23:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.684 23:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzBmMTk1MzcwNDJkNzQ3NmUzYmNiN2M1ZDYzMmEzYmVlZTFhNmRmMGJiY2U0ZmY5LIqEHw==: --dhchap-ctrl-secret DHHC-1:03:Yzk1Y2U4MGQxMzA0NDk3NjYwNDZkZjQ3ZDlmYTEwNTUwMzUzOGU2ZjQyZmUwZGEzMjZmNThhMzNhM2VhMjUwMvkt5lk=: 00:19:07.621 23:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.621 23:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:07.621 23:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.621 23:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.621 23:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.621 23:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.621 23:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:07.621 23:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:07.880 23:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:07.880 23:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.880 23:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:07.880 23:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:07.880 23:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:07.880 23:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.880 23:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.880 23:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.880 23:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.880 23:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.880 23:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.880 23:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.138 00:19:08.138 23:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.138 23:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.138 23:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.396 23:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.396 23:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.396 23:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.396 23:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.396 23:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.396 23:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.396 { 00:19:08.396 "cntlid": 59, 00:19:08.396 "qid": 0, 00:19:08.396 "state": "enabled", 00:19:08.396 "thread": "nvmf_tgt_poll_group_000", 00:19:08.396 "listen_address": { 00:19:08.396 "trtype": "TCP", 00:19:08.396 "adrfam": "IPv4", 00:19:08.396 "traddr": "10.0.0.2", 00:19:08.396 "trsvcid": "4420" 00:19:08.396 }, 00:19:08.396 "peer_address": { 00:19:08.396 "trtype": "TCP", 00:19:08.396 "adrfam": "IPv4", 00:19:08.396 "traddr": "10.0.0.1", 00:19:08.396 "trsvcid": "48592" 00:19:08.397 }, 00:19:08.397 "auth": { 00:19:08.397 "state": "completed", 00:19:08.397 "digest": "sha384", 00:19:08.397 "dhgroup": "ffdhe2048" 00:19:08.397 } 00:19:08.397 } 00:19:08.397 ]' 00:19:08.397 23:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.397 23:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:08.397 23:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.397 23:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:08.397 23:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.657 23:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.657 23:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.657 23:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.916 23:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmM0YzQ1OTVjZmQyNGIxZjczNzQxMDQwMDU3ZDBmYTDiTEuE: --dhchap-ctrl-secret DHHC-1:02:YTBjNDM0YTUyOGRkMTdmNWE0N2VkNmJlNzcyZGNiNTg1ZTIxOGQ2MTZlNTg1MTAzl9e2zQ==: 00:19:09.852 23:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.852 23:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:09.852 23:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.852 23:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.852 23:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.852 23:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.852 23:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:09.852 23:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:10.111 23:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:10.111 23:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.111 23:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:10.111 23:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:10.111 23:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:10.111 23:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.111 23:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.111 
23:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.111 23:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.111 23:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.111 23:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.111 23:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.369 00:19:10.369 23:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.369 23:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.369 23:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.627 23:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.627 23:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.627 23:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.627 23:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.627 23:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.627 23:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.627 { 00:19:10.627 "cntlid": 61, 00:19:10.627 "qid": 0, 00:19:10.627 "state": "enabled", 00:19:10.627 "thread": "nvmf_tgt_poll_group_000", 00:19:10.627 "listen_address": { 00:19:10.627 "trtype": "TCP", 00:19:10.627 "adrfam": "IPv4", 00:19:10.627 "traddr": "10.0.0.2", 00:19:10.627 "trsvcid": "4420" 00:19:10.627 }, 00:19:10.627 "peer_address": { 00:19:10.627 "trtype": "TCP", 00:19:10.627 "adrfam": "IPv4", 00:19:10.627 "traddr": "10.0.0.1", 00:19:10.627 "trsvcid": "48612" 00:19:10.627 }, 00:19:10.627 "auth": { 00:19:10.627 "state": "completed", 00:19:10.627 "digest": "sha384", 00:19:10.627 "dhgroup": "ffdhe2048" 00:19:10.627 } 00:19:10.627 } 00:19:10.627 ]' 00:19:10.627 23:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.627 23:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:10.627 23:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.885 23:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:10.885 23:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.885 23:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.885 23:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.885 23:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.143 23:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MjVkZjZhMTc0NDY2ZjAxMTczYzM5ZGYzMGQ1ZmZhYThlY2VmOTFmZjc5NTI5ZjNlZdE4Ag==: --dhchap-ctrl-secret DHHC-1:01:M2MwNjMzMGIzYzdlZTQ0ZDgzZDE1M2JiNTZkZTA2ZTT5I6h5: 00:19:12.079 23:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.079 23:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:12.079 23:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.079 23:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.079 23:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.079 23:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.079 23:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:12.079 23:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:12.337 23:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:19:12.337 23:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.337 23:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:12.337 23:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:12.337 23:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:12.337 23:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.337 23:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:12.337 23:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.337 23:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.337 23:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.337 
23:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.337 23:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.903 00:19:12.903 23:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.903 23:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.903 23:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.903 23:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.903 23:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.903 23:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.903 23:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.903 23:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.903 23:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.903 { 00:19:12.903 "cntlid": 63, 00:19:12.903 "qid": 0, 00:19:12.903 "state": "enabled", 00:19:12.903 "thread": "nvmf_tgt_poll_group_000", 00:19:12.903 "listen_address": { 00:19:12.903 "trtype": "TCP", 00:19:12.903 "adrfam": "IPv4", 00:19:12.903 "traddr": "10.0.0.2", 00:19:12.903 "trsvcid": "4420" 00:19:12.903 }, 00:19:12.903 "peer_address": { 00:19:12.903 "trtype": "TCP", 00:19:12.903 "adrfam": "IPv4", 00:19:12.903 "traddr": "10.0.0.1", 00:19:12.903 "trsvcid": "59814" 00:19:12.903 }, 00:19:12.903 "auth": { 00:19:12.903 "state": "completed", 00:19:12.903 "digest": "sha384", 00:19:12.903 "dhgroup": "ffdhe2048" 00:19:12.903 } 00:19:12.903 } 00:19:12.903 ]' 00:19:12.903 23:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.161 23:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:13.161 23:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.161 23:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:13.161 23:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.161 23:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.161 23:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.161 23:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:13.419 23:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzJlMDZkNjg2ZGI4NzE0MGRmZjhmMjliZGVkODM4ZjAyOTExMzk3MTZiOTk4MDQ3YzhkNWU4NTE4YWVjMmNmNsBHm6I=: 00:19:14.355 23:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.355 23:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:14.355 23:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.355 23:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.355 23:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.355 23:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.355 23:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.355 23:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:14.355 23:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:14.613 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:14.613 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.613 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:14.613 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:14.613 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:14.613 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.613 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.613 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.613 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.613 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.613 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.613 23:25:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.871 00:19:14.871 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.871 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.871 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.129 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.129 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.129 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.129 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.129 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.129 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.129 { 00:19:15.129 "cntlid": 65, 00:19:15.129 "qid": 0, 00:19:15.129 "state": "enabled", 00:19:15.129 "thread": "nvmf_tgt_poll_group_000", 00:19:15.129 "listen_address": { 00:19:15.129 "trtype": "TCP", 00:19:15.129 "adrfam": "IPv4", 00:19:15.129 "traddr": "10.0.0.2", 00:19:15.129 "trsvcid": "4420" 00:19:15.129 }, 00:19:15.129 "peer_address": { 00:19:15.129 "trtype": "TCP", 00:19:15.129 "adrfam": "IPv4", 00:19:15.129 "traddr": "10.0.0.1", 00:19:15.129 "trsvcid": "59838" 00:19:15.129 }, 00:19:15.129 "auth": { 00:19:15.129 "state": "completed", 00:19:15.129 "digest": "sha384", 00:19:15.129 "dhgroup": "ffdhe3072" 00:19:15.129 } 00:19:15.129 } 00:19:15.129 ]' 00:19:15.129 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.387 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:15.387 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.387 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:15.387 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.387 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.387 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.387 23:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.645 23:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzBmMTk1MzcwNDJkNzQ3NmUzYmNiN2M1ZDYzMmEzYmVlZTFhNmRmMGJiY2U0ZmY5LIqEHw==: --dhchap-ctrl-secret DHHC-1:03:Yzk1Y2U4MGQxMzA0NDk3NjYwNDZkZjQ3ZDlmYTEwNTUwMzUzOGU2ZjQyZmUwZGEzMjZmNThhMzNhM2VhMjUwMvkt5lk=: 00:19:16.608 23:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.608 23:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.608 23:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.608 23:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.608 23:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.608 23:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.608 23:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:16.608 23:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:16.869 23:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:16.869 23:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.869 23:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:16.869 23:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:16.869 23:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:16.869 23:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.869 23:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.869 23:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.869 23:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.869 23:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.869 23:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.869 23:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.436 00:19:17.436 23:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.436 23:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.436 23:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.436 23:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.436 23:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.436 23:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.437 23:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.437 23:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.437 23:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.437 { 00:19:17.437 "cntlid": 67, 00:19:17.437 "qid": 0, 00:19:17.437 "state": "enabled", 00:19:17.437 "thread": "nvmf_tgt_poll_group_000", 00:19:17.437 "listen_address": { 00:19:17.437 "trtype": "TCP", 00:19:17.437 "adrfam": "IPv4", 00:19:17.437 "traddr": "10.0.0.2", 00:19:17.437 "trsvcid": "4420" 00:19:17.437 }, 00:19:17.437 "peer_address": { 00:19:17.437 "trtype": "TCP", 00:19:17.437 "adrfam": "IPv4", 00:19:17.437 "traddr": "10.0.0.1", 00:19:17.437 "trsvcid": "59868" 00:19:17.437 }, 00:19:17.437 "auth": { 00:19:17.437 "state": "completed", 00:19:17.437 "digest": "sha384", 00:19:17.437 "dhgroup": "ffdhe3072" 00:19:17.437 } 00:19:17.437 } 00:19:17.437 ]' 00:19:17.437 23:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.695 23:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:17.695 23:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.695 23:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:17.695 23:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.695 23:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.695 23:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.695 23:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.953 23:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmM0YzQ1OTVjZmQyNGIxZjczNzQxMDQwMDU3ZDBmYTDiTEuE: --dhchap-ctrl-secret DHHC-1:02:YTBjNDM0YTUyOGRkMTdmNWE0N2VkNmJlNzcyZGNiNTg1ZTIxOGQ2MTZlNTg1MTAzl9e2zQ==: 00:19:18.889 23:25:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.889 23:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:18.889 23:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.889 23:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.889 23:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.889 23:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.889 23:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:18.889 23:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:19.147 23:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:19:19.147 23:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.147 23:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:19.147 23:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:19.147 23:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:19.147 23:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.147 23:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.147 23:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.147 23:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.147 23:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.147 23:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.147 23:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.407 00:19:19.668 23:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.668 23:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.668 23:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.927 23:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.927 23:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.927 23:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.927 23:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.927 23:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.927 23:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.927 { 00:19:19.927 "cntlid": 69, 00:19:19.927 "qid": 0, 00:19:19.927 "state": "enabled", 00:19:19.927 "thread": "nvmf_tgt_poll_group_000", 00:19:19.927 "listen_address": { 00:19:19.927 "trtype": "TCP", 00:19:19.927 "adrfam": "IPv4", 00:19:19.927 "traddr": "10.0.0.2", 00:19:19.927 "trsvcid": "4420" 00:19:19.927 }, 00:19:19.927 "peer_address": { 00:19:19.927 "trtype": "TCP", 00:19:19.927 "adrfam": "IPv4", 00:19:19.927 "traddr": "10.0.0.1", 00:19:19.927 "trsvcid": "59902" 00:19:19.927 }, 00:19:19.927 "auth": { 00:19:19.927 "state": "completed", 00:19:19.927 "digest": "sha384", 00:19:19.927 "dhgroup": "ffdhe3072" 00:19:19.927 } 00:19:19.927 } 00:19:19.927 ]' 00:19:19.927 23:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.927 23:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:19.927 23:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.927 23:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:19.927 23:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.927 23:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.927 23:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.927 23:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.186 23:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MjVkZjZhMTc0NDY2ZjAxMTczYzM5ZGYzMGQ1ZmZhYThlY2VmOTFmZjc5NTI5ZjNlZdE4Ag==: --dhchap-ctrl-secret DHHC-1:01:M2MwNjMzMGIzYzdlZTQ0ZDgzZDE1M2JiNTZkZTA2ZTT5I6h5: 00:19:21.124 23:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.124 23:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.124 23:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.124 23:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.124 23:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.124 23:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.124 23:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:21.124 23:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:21.383 23:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:19:21.383 23:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.383 23:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:21.383 23:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:21.383 23:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:21.383 23:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.383 23:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:21.383 23:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.383 23:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.383 23:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.383 23:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.383 23:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.641 00:19:21.641 23:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.641 23:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.641 23:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.900 23:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.900 23:25:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.900 23:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.900 23:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.900 23:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.900 23:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.900 { 00:19:21.900 "cntlid": 71, 00:19:21.900 "qid": 0, 00:19:21.900 "state": "enabled", 00:19:21.900 "thread": "nvmf_tgt_poll_group_000", 00:19:21.900 "listen_address": { 00:19:21.900 "trtype": "TCP", 00:19:21.900 "adrfam": "IPv4", 00:19:21.900 "traddr": "10.0.0.2", 00:19:21.900 "trsvcid": "4420" 00:19:21.900 }, 00:19:21.900 "peer_address": { 00:19:21.900 "trtype": "TCP", 00:19:21.900 "adrfam": "IPv4", 00:19:21.900 "traddr": "10.0.0.1", 00:19:21.900 "trsvcid": "33802" 00:19:21.900 }, 00:19:21.900 "auth": { 00:19:21.900 "state": "completed", 00:19:21.900 "digest": "sha384", 00:19:21.900 "dhgroup": "ffdhe3072" 00:19:21.900 } 00:19:21.900 } 00:19:21.900 ]' 00:19:21.900 23:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.158 23:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:22.158 23:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.158 23:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:22.158 23:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.158 23:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.158 23:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.158 23:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.416 23:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzJlMDZkNjg2ZGI4NzE0MGRmZjhmMjliZGVkODM4ZjAyOTExMzk3MTZiOTk4MDQ3YzhkNWU4NTE4YWVjMmNmNsBHm6I=: 00:19:23.354 23:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.355 23:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:23.355 23:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.355 23:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.355 23:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.355 23:25:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.355 23:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.355 23:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:23.355 23:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:23.613 23:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:19:23.613 23:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.613 23:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:23.613 23:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:23.613 23:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:23.613 23:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.613 23:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.613 23:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.613 23:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.613 23:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.613 23:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.613 23:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.182 00:19:24.182 23:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.182 23:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.182 23:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.440 23:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.440 23:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.440 23:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.440 23:25:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.440 23:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.440 23:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.440 { 00:19:24.440 "cntlid": 73, 00:19:24.440 "qid": 0, 00:19:24.440 "state": "enabled", 00:19:24.440 "thread": "nvmf_tgt_poll_group_000", 00:19:24.440 "listen_address": { 00:19:24.440 "trtype": "TCP", 00:19:24.440 "adrfam": "IPv4", 00:19:24.440 "traddr": "10.0.0.2", 00:19:24.440 "trsvcid": "4420" 00:19:24.440 }, 00:19:24.440 "peer_address": { 00:19:24.440 "trtype": "TCP", 00:19:24.440 "adrfam": "IPv4", 00:19:24.440 "traddr": "10.0.0.1", 00:19:24.440 "trsvcid": "33832" 00:19:24.440 }, 00:19:24.440 "auth": { 00:19:24.440 "state": "completed", 00:19:24.440 "digest": "sha384", 00:19:24.440 "dhgroup": "ffdhe4096" 00:19:24.440 } 00:19:24.440 } 00:19:24.440 ]' 00:19:24.441 23:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.441 23:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:24.441 23:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.441 23:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:24.441 23:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.441 23:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.441 23:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.441 23:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.699 23:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzBmMTk1MzcwNDJkNzQ3NmUzYmNiN2M1ZDYzMmEzYmVlZTFhNmRmMGJiY2U0ZmY5LIqEHw==: --dhchap-ctrl-secret DHHC-1:03:Yzk1Y2U4MGQxMzA0NDk3NjYwNDZkZjQ3ZDlmYTEwNTUwMzUzOGU2ZjQyZmUwZGEzMjZmNThhMzNhM2VhMjUwMvkt5lk=: 00:19:25.636 23:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.636 23:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.636 23:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.636 23:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.636 23:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.636 23:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.637 23:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:25.637 23:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:26.205 23:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:19:26.205 23:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.205 23:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:26.205 23:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:26.205 23:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:26.205 23:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.205 23:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.205 23:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.205 23:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.205 23:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.205 23:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.205 23:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.463 00:19:26.463 23:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.463 23:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.463 23:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.720 23:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.720 23:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.720 23:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.720 23:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.720 23:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.720 23:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:19:26.720 { 00:19:26.720 "cntlid": 75, 00:19:26.720 "qid": 0, 00:19:26.720 "state": "enabled", 00:19:26.720 "thread": "nvmf_tgt_poll_group_000", 00:19:26.720 "listen_address": { 00:19:26.721 "trtype": "TCP", 00:19:26.721 "adrfam": "IPv4", 00:19:26.721 "traddr": "10.0.0.2", 00:19:26.721 "trsvcid": "4420" 00:19:26.721 }, 00:19:26.721 "peer_address": { 00:19:26.721 "trtype": "TCP", 00:19:26.721 "adrfam": "IPv4", 00:19:26.721 "traddr": "10.0.0.1", 00:19:26.721 "trsvcid": "33850" 00:19:26.721 }, 00:19:26.721 "auth": { 00:19:26.721 "state": "completed", 00:19:26.721 "digest": "sha384", 00:19:26.721 "dhgroup": "ffdhe4096" 00:19:26.721 } 00:19:26.721 } 00:19:26.721 ]' 00:19:26.721 23:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.721 23:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:26.721 23:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.721 23:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:26.721 23:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.721 23:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.721 23:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.721 23:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.978 23:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmM0YzQ1OTVjZmQyNGIxZjczNzQxMDQwMDU3ZDBmYTDiTEuE: --dhchap-ctrl-secret DHHC-1:02:YTBjNDM0YTUyOGRkMTdmNWE0N2VkNmJlNzcyZGNiNTg1ZTIxOGQ2MTZlNTg1MTAzl9e2zQ==: 00:19:27.916 23:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.916 23:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.916 23:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.916 23:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.916 23:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.916 23:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.916 23:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:27.916 23:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:28.482 
23:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:19:28.482 23:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.482 23:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:28.483 23:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:28.483 23:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:28.483 23:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.483 23:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.483 23:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.483 23:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.483 23:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.483 23:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.483 23:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.743 00:19:28.743 23:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.743 23:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.743 23:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.002 23:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.002 23:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.002 23:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.002 23:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.002 23:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.002 23:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.002 { 00:19:29.002 "cntlid": 77, 00:19:29.002 "qid": 0, 00:19:29.002 "state": "enabled", 00:19:29.002 "thread": "nvmf_tgt_poll_group_000", 00:19:29.002 "listen_address": { 00:19:29.002 "trtype": "TCP", 00:19:29.002 "adrfam": "IPv4", 00:19:29.002 "traddr": "10.0.0.2", 00:19:29.002 "trsvcid": "4420" 00:19:29.002 }, 00:19:29.002 "peer_address": { 
00:19:29.002 "trtype": "TCP", 00:19:29.002 "adrfam": "IPv4", 00:19:29.002 "traddr": "10.0.0.1", 00:19:29.002 "trsvcid": "33874" 00:19:29.002 }, 00:19:29.002 "auth": { 00:19:29.002 "state": "completed", 00:19:29.002 "digest": "sha384", 00:19:29.002 "dhgroup": "ffdhe4096" 00:19:29.002 } 00:19:29.002 } 00:19:29.002 ]' 00:19:29.002 23:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.002 23:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:29.002 23:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.002 23:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:29.002 23:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.002 23:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.002 23:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.002 23:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.262 23:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MjVkZjZhMTc0NDY2ZjAxMTczYzM5ZGYzMGQ1ZmZhYThlY2VmOTFmZjc5NTI5ZjNlZdE4Ag==: --dhchap-ctrl-secret DHHC-1:01:M2MwNjMzMGIzYzdlZTQ0ZDgzZDE1M2JiNTZkZTA2ZTT5I6h5: 00:19:30.199 23:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.199 23:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.199 23:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.199 23:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.457 23:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.457 23:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.458 23:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:30.458 23:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:30.716 23:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:19:30.716 23:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.716 23:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:19:30.716 23:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:30.716 23:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:30.716 23:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.716 23:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:30.716 23:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.716 23:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.716 23:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.716 23:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.716 23:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.975 00:19:30.975 23:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.975 23:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.975 23:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.232 23:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.232 23:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.232 23:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.232 23:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.232 23:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.232 23:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.232 { 00:19:31.232 "cntlid": 79, 00:19:31.232 "qid": 0, 00:19:31.232 "state": "enabled", 00:19:31.232 "thread": "nvmf_tgt_poll_group_000", 00:19:31.232 "listen_address": { 00:19:31.232 "trtype": "TCP", 00:19:31.232 "adrfam": "IPv4", 00:19:31.232 "traddr": "10.0.0.2", 00:19:31.232 "trsvcid": "4420" 00:19:31.232 }, 00:19:31.232 "peer_address": { 00:19:31.232 "trtype": "TCP", 00:19:31.232 "adrfam": "IPv4", 00:19:31.232 "traddr": "10.0.0.1", 00:19:31.232 "trsvcid": "33908" 00:19:31.232 }, 00:19:31.232 "auth": { 00:19:31.232 "state": "completed", 00:19:31.232 "digest": "sha384", 00:19:31.232 "dhgroup": "ffdhe4096" 00:19:31.232 } 00:19:31.232 } 00:19:31.232 ]' 00:19:31.232 23:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:19:31.489 23:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:31.489 23:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.489 23:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:31.489 23:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.489 23:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.489 23:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.489 23:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.750 23:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzJlMDZkNjg2ZGI4NzE0MGRmZjhmMjliZGVkODM4ZjAyOTExMzk3MTZiOTk4MDQ3YzhkNWU4NTE4YWVjMmNmNsBHm6I=: 00:19:32.688 23:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.688 23:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.688 23:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.688 23:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.688 23:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.688 23:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:32.688 23:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.688 23:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:32.688 23:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:32.946 23:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:19:32.946 23:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.946 23:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:32.946 23:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:32.946 23:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:32.946 23:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
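
[Annotation] The --dhchap-secret/--dhchap-ctrl-secret strings passed to nvme connect throughout this log use the DHHC-1 representation from the NVMe-oF specification: DHHC-1:<id>:<base64>:, where the base64 payload is the raw key followed by a four-byte CRC-32, and the <id> field records which transform the key was generated for (00 = untransformed, 01/02/03 = SHA-256/384/512) — which is why keys 0 through 3 in this trace carry ids 00 through 03. An illustrative sanity check of the key0 secret from this trace (bash; the layout claim is from the spec, the secret string from the log above):

  secret='DHHC-1:00:MzBmMTk1MzcwNDJkNzQ3NmUzYmNiN2M1ZDYzMmEzYmVlZTFhNmRmMGJiY2U0ZmY5LIqEHw==:'
  b64=${secret#DHHC-1:*:}   # strip the "DHHC-1:<id>:" prefix
  b64=${b64%:}              # and the trailing colon
  printf '%s' "$b64" | base64 -d | wc -c   # prints 52: 48 key bytes + 4 CRC-32 bytes

Recent nvme-cli can mint secrets in this format with nvme gen-dhchap-key; the keys used here were pre-generated by the test script.
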
00:19:32.946 23:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.946 23:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.946 23:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.946 23:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.946 23:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.946 23:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.514 00:19:33.514 23:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.514 23:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.514 23:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.772 23:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.772 23:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.772 23:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.772 23:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.772 23:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.772 23:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.772 { 00:19:33.772 "cntlid": 81, 00:19:33.772 "qid": 0, 00:19:33.772 "state": "enabled", 00:19:33.772 "thread": "nvmf_tgt_poll_group_000", 00:19:33.772 "listen_address": { 00:19:33.772 "trtype": "TCP", 00:19:33.772 "adrfam": "IPv4", 00:19:33.772 "traddr": "10.0.0.2", 00:19:33.772 "trsvcid": "4420" 00:19:33.772 }, 00:19:33.772 "peer_address": { 00:19:33.772 "trtype": "TCP", 00:19:33.772 "adrfam": "IPv4", 00:19:33.772 "traddr": "10.0.0.1", 00:19:33.772 "trsvcid": "41506" 00:19:33.772 }, 00:19:33.772 "auth": { 00:19:33.772 "state": "completed", 00:19:33.772 "digest": "sha384", 00:19:33.772 "dhgroup": "ffdhe6144" 00:19:33.772 } 00:19:33.772 } 00:19:33.772 ]' 00:19:33.772 23:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.772 23:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:33.772 23:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.772 23:25:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:33.772 23:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.034 23:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.034 23:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.034 23:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.323 23:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzBmMTk1MzcwNDJkNzQ3NmUzYmNiN2M1ZDYzMmEzYmVlZTFhNmRmMGJiY2U0ZmY5LIqEHw==: --dhchap-ctrl-secret DHHC-1:03:Yzk1Y2U4MGQxMzA0NDk3NjYwNDZkZjQ3ZDlmYTEwNTUwMzUzOGU2ZjQyZmUwZGEzMjZmNThhMzNhM2VhMjUwMvkt5lk=: 00:19:35.261 23:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.261 23:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.261 23:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.261 23:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.261 23:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.261 23:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.261 23:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:35.261 23:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:35.519 23:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:35.519 23:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.519 23:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:35.519 23:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:35.519 23:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:35.519 23:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.519 23:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.519 23:25:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.519 23:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.519 23:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.519 23:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.519 23:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.094 00:19:36.094 23:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.094 23:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.094 23:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.351 23:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.351 23:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.351 23:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.351 23:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.351 23:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.351 23:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.351 { 00:19:36.351 "cntlid": 83, 00:19:36.351 "qid": 0, 00:19:36.351 "state": "enabled", 00:19:36.351 "thread": "nvmf_tgt_poll_group_000", 00:19:36.351 "listen_address": { 00:19:36.351 "trtype": "TCP", 00:19:36.351 "adrfam": "IPv4", 00:19:36.351 "traddr": "10.0.0.2", 00:19:36.351 "trsvcid": "4420" 00:19:36.351 }, 00:19:36.351 "peer_address": { 00:19:36.351 "trtype": "TCP", 00:19:36.351 "adrfam": "IPv4", 00:19:36.351 "traddr": "10.0.0.1", 00:19:36.351 "trsvcid": "41540" 00:19:36.351 }, 00:19:36.351 "auth": { 00:19:36.351 "state": "completed", 00:19:36.351 "digest": "sha384", 00:19:36.351 "dhgroup": "ffdhe6144" 00:19:36.351 } 00:19:36.351 } 00:19:36.351 ]' 00:19:36.351 23:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.351 23:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:36.351 23:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.351 23:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:36.351 23:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.351 23:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.351 23:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.351 23:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.608 23:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmM0YzQ1OTVjZmQyNGIxZjczNzQxMDQwMDU3ZDBmYTDiTEuE: --dhchap-ctrl-secret DHHC-1:02:YTBjNDM0YTUyOGRkMTdmNWE0N2VkNmJlNzcyZGNiNTg1ZTIxOGQ2MTZlNTg1MTAzl9e2zQ==: 00:19:37.543 23:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.543 23:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.543 23:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.543 23:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.543 23:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.543 23:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.543 23:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:37.543 23:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:37.801 23:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:37.801 23:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.802 23:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:37.802 23:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:37.802 23:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:37.802 23:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.802 23:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.802 23:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.802 23:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.802 23:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.802 23:25:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.802 23:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.370 00:19:38.370 23:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.370 23:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.370 23:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.628 23:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.628 23:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.628 23:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.628 23:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.628 23:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.628 23:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.628 { 00:19:38.628 "cntlid": 85, 00:19:38.628 "qid": 0, 00:19:38.628 "state": "enabled", 00:19:38.628 "thread": "nvmf_tgt_poll_group_000", 00:19:38.628 "listen_address": { 00:19:38.629 "trtype": "TCP", 00:19:38.629 "adrfam": "IPv4", 00:19:38.629 "traddr": "10.0.0.2", 00:19:38.629 "trsvcid": "4420" 00:19:38.629 }, 00:19:38.629 "peer_address": { 00:19:38.629 "trtype": "TCP", 00:19:38.629 "adrfam": "IPv4", 00:19:38.629 "traddr": "10.0.0.1", 00:19:38.629 "trsvcid": "41572" 00:19:38.629 }, 00:19:38.629 "auth": { 00:19:38.629 "state": "completed", 00:19:38.629 "digest": "sha384", 00:19:38.629 "dhgroup": "ffdhe6144" 00:19:38.629 } 00:19:38.629 } 00:19:38.629 ]' 00:19:38.629 23:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.887 23:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:38.887 23:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.887 23:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:38.887 23:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.887 23:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.887 23:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.887 23:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.144 23:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MjVkZjZhMTc0NDY2ZjAxMTczYzM5ZGYzMGQ1ZmZhYThlY2VmOTFmZjc5NTI5ZjNlZdE4Ag==: --dhchap-ctrl-secret DHHC-1:01:M2MwNjMzMGIzYzdlZTQ0ZDgzZDE1M2JiNTZkZTA2ZTT5I6h5: 00:19:40.076 23:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.076 23:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.076 23:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.076 23:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.076 23:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.076 23:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.076 23:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:40.076 23:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:40.334 23:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:40.334 23:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.334 23:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:40.334 23:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:40.334 23:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:40.334 23:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.334 23:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:40.334 23:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.334 23:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.334 23:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.334 23:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:40.334 23:25:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:40.901 00:19:40.901 23:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.901 23:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.901 23:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.159 23:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.159 23:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.159 23:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.159 23:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.159 23:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.159 23:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.159 { 00:19:41.159 "cntlid": 87, 00:19:41.159 "qid": 0, 00:19:41.159 "state": "enabled", 00:19:41.159 "thread": "nvmf_tgt_poll_group_000", 00:19:41.159 "listen_address": { 00:19:41.159 "trtype": "TCP", 00:19:41.159 "adrfam": "IPv4", 00:19:41.159 "traddr": "10.0.0.2", 00:19:41.159 "trsvcid": "4420" 00:19:41.159 }, 00:19:41.159 "peer_address": { 00:19:41.159 "trtype": "TCP", 00:19:41.159 "adrfam": "IPv4", 00:19:41.159 "traddr": "10.0.0.1", 00:19:41.159 "trsvcid": "41602" 00:19:41.159 }, 00:19:41.159 "auth": { 00:19:41.159 "state": "completed", 00:19:41.159 "digest": "sha384", 00:19:41.159 "dhgroup": "ffdhe6144" 00:19:41.159 } 00:19:41.159 } 00:19:41.159 ]' 00:19:41.159 23:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.159 23:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:41.159 23:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.159 23:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:41.159 23:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.159 23:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.159 23:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.159 23:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.418 23:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-secret DHHC-1:03:MzJlMDZkNjg2ZGI4NzE0MGRmZjhmMjliZGVkODM4ZjAyOTExMzk3MTZiOTk4MDQ3YzhkNWU4NTE4YWVjMmNmNsBHm6I=: 00:19:42.791 23:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.791 23:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:42.791 23:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.791 23:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.791 23:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.791 23:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.791 23:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.791 23:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:42.791 23:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:42.791 23:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:42.791 23:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.791 23:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:42.791 23:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:42.791 23:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:42.791 23:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.791 23:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.791 23:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.791 23:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.791 23:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.791 23:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.791 23:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.729 00:19:43.729 23:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.729 23:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.729 23:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.987 23:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.987 23:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.987 23:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.987 23:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.987 23:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.987 23:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.987 { 00:19:43.987 "cntlid": 89, 00:19:43.987 "qid": 0, 00:19:43.987 "state": "enabled", 00:19:43.987 "thread": "nvmf_tgt_poll_group_000", 00:19:43.987 "listen_address": { 00:19:43.987 "trtype": "TCP", 00:19:43.987 "adrfam": "IPv4", 00:19:43.987 "traddr": "10.0.0.2", 00:19:43.987 "trsvcid": "4420" 00:19:43.987 }, 00:19:43.987 "peer_address": { 00:19:43.987 "trtype": "TCP", 00:19:43.987 "adrfam": "IPv4", 00:19:43.987 "traddr": "10.0.0.1", 00:19:43.987 "trsvcid": "40054" 00:19:43.987 }, 00:19:43.987 "auth": { 00:19:43.987 "state": "completed", 00:19:43.987 "digest": "sha384", 00:19:43.987 "dhgroup": "ffdhe8192" 00:19:43.987 } 00:19:43.987 } 00:19:43.987 ]' 00:19:43.987 23:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.987 23:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:43.987 23:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.987 23:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:43.987 23:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.987 23:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.987 23:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.987 23:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.247 23:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzBmMTk1MzcwNDJkNzQ3NmUzYmNiN2M1ZDYzMmEzYmVlZTFhNmRmMGJiY2U0ZmY5LIqEHw==: --dhchap-ctrl-secret DHHC-1:03:Yzk1Y2U4MGQxMzA0NDk3NjYwNDZkZjQ3ZDlmYTEwNTUwMzUzOGU2ZjQyZmUwZGEzMjZmNThhMzNhM2VhMjUwMvkt5lk=: 00:19:45.185 23:25:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.185 23:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.185 23:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.185 23:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.185 23:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.185 23:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.185 23:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:45.185 23:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:45.442 23:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:45.442 23:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.442 23:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:45.442 23:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:45.442 23:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:45.442 23:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.442 23:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.442 23:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.442 23:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.442 23:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.443 23:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.443 23:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.381 00:19:46.381 23:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.381 23:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.381 23:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.638 23:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.638 23:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.639 23:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.639 23:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.639 23:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.639 23:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.639 { 00:19:46.639 "cntlid": 91, 00:19:46.639 "qid": 0, 00:19:46.639 "state": "enabled", 00:19:46.639 "thread": "nvmf_tgt_poll_group_000", 00:19:46.639 "listen_address": { 00:19:46.639 "trtype": "TCP", 00:19:46.639 "adrfam": "IPv4", 00:19:46.639 "traddr": "10.0.0.2", 00:19:46.639 "trsvcid": "4420" 00:19:46.639 }, 00:19:46.639 "peer_address": { 00:19:46.639 "trtype": "TCP", 00:19:46.639 "adrfam": "IPv4", 00:19:46.639 "traddr": "10.0.0.1", 00:19:46.639 "trsvcid": "40084" 00:19:46.639 }, 00:19:46.639 "auth": { 00:19:46.639 "state": "completed", 00:19:46.639 "digest": "sha384", 00:19:46.639 "dhgroup": "ffdhe8192" 00:19:46.639 } 00:19:46.639 } 00:19:46.639 ]' 00:19:46.639 23:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.896 23:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:46.896 23:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.896 23:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:46.896 23:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.896 23:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.896 23:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.896 23:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.156 23:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmM0YzQ1OTVjZmQyNGIxZjczNzQxMDQwMDU3ZDBmYTDiTEuE: --dhchap-ctrl-secret DHHC-1:02:YTBjNDM0YTUyOGRkMTdmNWE0N2VkNmJlNzcyZGNiNTg1ZTIxOGQ2MTZlNTg1MTAzl9e2zQ==: 00:19:48.092 23:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.092 23:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.092 23:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.092 23:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.092 23:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.092 23:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.092 23:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:48.092 23:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:48.350 23:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:48.350 23:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.350 23:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:48.350 23:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:48.350 23:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:48.350 23:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.350 23:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.350 23:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.350 23:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.350 23:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.350 23:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.350 23:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.283 00:19:49.283 23:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.283 23:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.283 23:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.540 23:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:19:49.540 23:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.540 23:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.540 23:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.540 23:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.540 23:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.540 { 00:19:49.540 "cntlid": 93, 00:19:49.540 "qid": 0, 00:19:49.540 "state": "enabled", 00:19:49.540 "thread": "nvmf_tgt_poll_group_000", 00:19:49.540 "listen_address": { 00:19:49.540 "trtype": "TCP", 00:19:49.540 "adrfam": "IPv4", 00:19:49.540 "traddr": "10.0.0.2", 00:19:49.540 "trsvcid": "4420" 00:19:49.540 }, 00:19:49.540 "peer_address": { 00:19:49.540 "trtype": "TCP", 00:19:49.540 "adrfam": "IPv4", 00:19:49.540 "traddr": "10.0.0.1", 00:19:49.540 "trsvcid": "40120" 00:19:49.540 }, 00:19:49.540 "auth": { 00:19:49.540 "state": "completed", 00:19:49.540 "digest": "sha384", 00:19:49.540 "dhgroup": "ffdhe8192" 00:19:49.540 } 00:19:49.540 } 00:19:49.540 ]' 00:19:49.540 23:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.540 23:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:49.540 23:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.540 23:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:49.540 23:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.540 23:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.540 23:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.540 23:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.798 23:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MjVkZjZhMTc0NDY2ZjAxMTczYzM5ZGYzMGQ1ZmZhYThlY2VmOTFmZjc5NTI5ZjNlZdE4Ag==: --dhchap-ctrl-secret DHHC-1:01:M2MwNjMzMGIzYzdlZTQ0ZDgzZDE1M2JiNTZkZTA2ZTT5I6h5: 00:19:50.732 23:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.732 23:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.732 23:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.732 23:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.732 23:25:48 
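Every pass of the key loop above drives the same sequence of steps, only with a different digest/dhgroup/key triple. A condensed sketch of one pass (here the sha384/ffdhe8192 pass with key1; rpc.py abbreviates the full scripts/rpc.py path shown in the log, rpc_cmd is the suite's wrapper for the target-side RPC socket, and hostnqn stands for the nqn.2014-08.org.nvmexpress:uuid host NQN used throughout):

    # host side: allow only the digest/dhgroup pair under test
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
    # target side: register the host NQN with its DH-HMAC-CHAP key pair
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
            --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # host side: attaching forces the authentication handshake on the new qpair
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
            --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # inspect the negotiated auth parameters, then tear down; the pass ends with
    # an equivalent nvme-cli connect/disconnect and nvmf_subsystem_remove_host
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0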
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.732 23:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.732 23:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:50.732 23:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:50.991 23:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:50.992 23:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.992 23:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:50.992 23:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:50.992 23:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:50.992 23:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.992 23:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:50.992 23:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.992 23:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.992 23:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.992 23:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.992 23:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.928 00:19:51.928 23:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.928 23:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.928 23:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.186 23:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.186 23:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.186 23:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.186 23:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:19:52.186 23:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.186 23:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.186 { 00:19:52.186 "cntlid": 95, 00:19:52.186 "qid": 0, 00:19:52.186 "state": "enabled", 00:19:52.186 "thread": "nvmf_tgt_poll_group_000", 00:19:52.186 "listen_address": { 00:19:52.186 "trtype": "TCP", 00:19:52.186 "adrfam": "IPv4", 00:19:52.186 "traddr": "10.0.0.2", 00:19:52.186 "trsvcid": "4420" 00:19:52.186 }, 00:19:52.186 "peer_address": { 00:19:52.186 "trtype": "TCP", 00:19:52.186 "adrfam": "IPv4", 00:19:52.186 "traddr": "10.0.0.1", 00:19:52.186 "trsvcid": "40134" 00:19:52.186 }, 00:19:52.186 "auth": { 00:19:52.186 "state": "completed", 00:19:52.186 "digest": "sha384", 00:19:52.186 "dhgroup": "ffdhe8192" 00:19:52.186 } 00:19:52.186 } 00:19:52.186 ]' 00:19:52.186 23:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.186 23:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:52.186 23:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.186 23:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:52.186 23:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.186 23:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.186 23:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.186 23:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.473 23:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzJlMDZkNjg2ZGI4NzE0MGRmZjhmMjliZGVkODM4ZjAyOTExMzk3MTZiOTk4MDQ3YzhkNWU4NTE4YWVjMmNmNsBHm6I=: 00:19:53.853 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.853 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.853 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.853 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.853 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.853 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:53.853 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:53.853 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.853 23:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:53.853 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:53.853 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:53.853 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.853 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:53.853 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:53.853 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:53.853 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.853 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.853 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.853 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.853 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.853 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.853 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.111 00:19:54.111 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.111 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.111 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.369 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.369 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.369 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.369 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.369 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.369 23:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.369 { 00:19:54.369 "cntlid": 97, 00:19:54.369 "qid": 0, 00:19:54.369 "state": "enabled", 00:19:54.369 "thread": "nvmf_tgt_poll_group_000", 00:19:54.369 "listen_address": { 00:19:54.369 "trtype": "TCP", 00:19:54.369 "adrfam": "IPv4", 00:19:54.369 "traddr": "10.0.0.2", 00:19:54.369 "trsvcid": "4420" 00:19:54.369 }, 00:19:54.369 "peer_address": { 00:19:54.370 "trtype": "TCP", 00:19:54.370 "adrfam": "IPv4", 00:19:54.370 "traddr": "10.0.0.1", 00:19:54.370 "trsvcid": "34500" 00:19:54.370 }, 00:19:54.370 "auth": { 00:19:54.370 "state": "completed", 00:19:54.370 "digest": "sha512", 00:19:54.370 "dhgroup": "null" 00:19:54.370 } 00:19:54.370 } 00:19:54.370 ]' 00:19:54.370 23:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.370 23:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:54.370 23:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.370 23:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:54.370 23:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.629 23:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.629 23:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.629 23:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.629 23:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzBmMTk1MzcwNDJkNzQ3NmUzYmNiN2M1ZDYzMmEzYmVlZTFhNmRmMGJiY2U0ZmY5LIqEHw==: --dhchap-ctrl-secret DHHC-1:03:Yzk1Y2U4MGQxMzA0NDk3NjYwNDZkZjQ3ZDlmYTEwNTUwMzUzOGU2ZjQyZmUwZGEzMjZmNThhMzNhM2VhMjUwMvkt5lk=: 00:19:56.005 23:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.005 23:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.005 23:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.005 23:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.005 23:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.005 23:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.005 23:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:56.005 23:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:56.005 23:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:56.005 23:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.005 23:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:56.005 23:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:56.005 23:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:56.005 23:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.005 23:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.005 23:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.005 23:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.005 23:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.005 23:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.005 23:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.262 00:19:56.262 23:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.262 23:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.262 23:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.520 23:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.520 23:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.520 23:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.520 23:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.520 23:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.520 23:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.520 { 00:19:56.520 "cntlid": 99, 00:19:56.520 "qid": 0, 00:19:56.520 "state": "enabled", 00:19:56.520 "thread": "nvmf_tgt_poll_group_000", 00:19:56.520 "listen_address": { 00:19:56.520 "trtype": "TCP", 00:19:56.520 "adrfam": "IPv4", 00:19:56.520 
"traddr": "10.0.0.2", 00:19:56.520 "trsvcid": "4420" 00:19:56.520 }, 00:19:56.520 "peer_address": { 00:19:56.520 "trtype": "TCP", 00:19:56.520 "adrfam": "IPv4", 00:19:56.520 "traddr": "10.0.0.1", 00:19:56.520 "trsvcid": "34530" 00:19:56.520 }, 00:19:56.520 "auth": { 00:19:56.520 "state": "completed", 00:19:56.520 "digest": "sha512", 00:19:56.520 "dhgroup": "null" 00:19:56.520 } 00:19:56.520 } 00:19:56.520 ]' 00:19:56.520 23:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.520 23:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:56.520 23:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.520 23:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:56.520 23:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.778 23:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.778 23:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.778 23:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.038 23:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmM0YzQ1OTVjZmQyNGIxZjczNzQxMDQwMDU3ZDBmYTDiTEuE: --dhchap-ctrl-secret DHHC-1:02:YTBjNDM0YTUyOGRkMTdmNWE0N2VkNmJlNzcyZGNiNTg1ZTIxOGQ2MTZlNTg1MTAzl9e2zQ==: 00:19:57.975 23:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.975 23:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.975 23:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.975 23:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.975 23:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.975 23:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.975 23:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:57.975 23:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:58.233 23:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:58.233 23:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.233 23:25:55 
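The qpair dump is then asserted field by field. A minimal sketch of those checks for the sha512/null pass just shown, with qpairs holding the nvmf_subsystem_get_qpairs output as in the log:

    # auth.state must reach "completed"; digest/dhgroup must match what this pass set
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null      ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]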
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:58.233 23:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:58.233 23:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:58.233 23:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.233 23:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.233 23:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.233 23:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.233 23:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.233 23:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.233 23:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.492 00:19:58.492 23:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.492 23:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.492 23:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.750 23:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.750 23:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.750 23:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.750 23:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.750 23:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.750 23:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.750 { 00:19:58.750 "cntlid": 101, 00:19:58.750 "qid": 0, 00:19:58.750 "state": "enabled", 00:19:58.750 "thread": "nvmf_tgt_poll_group_000", 00:19:58.750 "listen_address": { 00:19:58.750 "trtype": "TCP", 00:19:58.750 "adrfam": "IPv4", 00:19:58.750 "traddr": "10.0.0.2", 00:19:58.750 "trsvcid": "4420" 00:19:58.750 }, 00:19:58.750 "peer_address": { 00:19:58.750 "trtype": "TCP", 00:19:58.750 "adrfam": "IPv4", 00:19:58.750 "traddr": "10.0.0.1", 00:19:58.750 "trsvcid": "34564" 00:19:58.750 }, 00:19:58.750 "auth": { 00:19:58.750 "state": "completed", 00:19:58.750 "digest": "sha512", 00:19:58.750 "dhgroup": "null" 
00:19:58.750 } 00:19:58.750 } 00:19:58.750 ]' 00:19:58.751 23:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.751 23:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:58.751 23:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.751 23:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:58.751 23:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.751 23:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.751 23:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.751 23:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.009 23:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MjVkZjZhMTc0NDY2ZjAxMTczYzM5ZGYzMGQ1ZmZhYThlY2VmOTFmZjc5NTI5ZjNlZdE4Ag==: --dhchap-ctrl-secret DHHC-1:01:M2MwNjMzMGIzYzdlZTQ0ZDgzZDE1M2JiNTZkZTA2ZTT5I6h5: 00:19:59.945 23:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.945 23:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.945 23:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.945 23:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.945 23:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.945 23:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.945 23:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:59.945 23:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:00.203 23:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:20:00.203 23:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.203 23:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:00.203 23:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:00.203 23:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:00.203 23:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.203 23:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:00.203 23:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.203 23:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.203 23:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.203 23:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:00.203 23:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:00.463 00:20:00.721 23:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.722 23:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.722 23:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.722 23:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.722 23:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.722 23:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.722 23:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.980 23:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.980 23:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.980 { 00:20:00.980 "cntlid": 103, 00:20:00.980 "qid": 0, 00:20:00.980 "state": "enabled", 00:20:00.980 "thread": "nvmf_tgt_poll_group_000", 00:20:00.980 "listen_address": { 00:20:00.980 "trtype": "TCP", 00:20:00.980 "adrfam": "IPv4", 00:20:00.980 "traddr": "10.0.0.2", 00:20:00.980 "trsvcid": "4420" 00:20:00.980 }, 00:20:00.980 "peer_address": { 00:20:00.980 "trtype": "TCP", 00:20:00.980 "adrfam": "IPv4", 00:20:00.980 "traddr": "10.0.0.1", 00:20:00.980 "trsvcid": "34592" 00:20:00.980 }, 00:20:00.980 "auth": { 00:20:00.980 "state": "completed", 00:20:00.980 "digest": "sha512", 00:20:00.980 "dhgroup": "null" 00:20:00.980 } 00:20:00.980 } 00:20:00.980 ]' 00:20:00.980 23:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.980 23:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:00.980 23:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.980 23:25:58 
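Note that the key3 passes carry no --dhchap-ctrlr-key: ckeys[3] is empty in this run, so the ${ckeys[$3]:+...} expansion visible above collapses to nothing and bidirectional (controller) authentication is simply skipped. The pattern in isolation, where $3 is the keyid argument of connect_authenticate:

    # ${var:+word} yields word only when var is set and non-empty; an empty
    # ckeys[3] therefore leaves the array empty and the flag is never passed
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"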
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:00.980 23:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.980 23:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.980 23:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.980 23:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.238 23:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzJlMDZkNjg2ZGI4NzE0MGRmZjhmMjliZGVkODM4ZjAyOTExMzk3MTZiOTk4MDQ3YzhkNWU4NTE4YWVjMmNmNsBHm6I=: 00:20:02.171 23:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.171 23:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.171 23:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.171 23:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.171 23:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.171 23:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:02.171 23:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.171 23:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:02.172 23:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:02.429 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:20:02.429 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.429 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:02.429 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:02.429 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:02.429 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.429 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.429 23:26:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.429 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.429 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.430 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.430 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.687 00:20:02.687 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.687 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.687 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.945 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.945 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.945 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.945 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.945 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.945 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.945 { 00:20:02.945 "cntlid": 105, 00:20:02.945 "qid": 0, 00:20:02.945 "state": "enabled", 00:20:02.945 "thread": "nvmf_tgt_poll_group_000", 00:20:02.945 "listen_address": { 00:20:02.945 "trtype": "TCP", 00:20:02.945 "adrfam": "IPv4", 00:20:02.945 "traddr": "10.0.0.2", 00:20:02.945 "trsvcid": "4420" 00:20:02.945 }, 00:20:02.945 "peer_address": { 00:20:02.945 "trtype": "TCP", 00:20:02.945 "adrfam": "IPv4", 00:20:02.945 "traddr": "10.0.0.1", 00:20:02.946 "trsvcid": "47486" 00:20:02.946 }, 00:20:02.946 "auth": { 00:20:02.946 "state": "completed", 00:20:02.946 "digest": "sha512", 00:20:02.946 "dhgroup": "ffdhe2048" 00:20:02.946 } 00:20:02.946 } 00:20:02.946 ]' 00:20:02.946 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.203 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:03.203 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.203 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:03.203 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.203 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.203 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.203 23:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.461 23:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzBmMTk1MzcwNDJkNzQ3NmUzYmNiN2M1ZDYzMmEzYmVlZTFhNmRmMGJiY2U0ZmY5LIqEHw==: --dhchap-ctrl-secret DHHC-1:03:Yzk1Y2U4MGQxMzA0NDk3NjYwNDZkZjQ3ZDlmYTEwNTUwMzUzOGU2ZjQyZmUwZGEzMjZmNThhMzNhM2VhMjUwMvkt5lk=: 00:20:04.397 23:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.397 23:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.397 23:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.397 23:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.397 23:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.397 23:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.397 23:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:04.397 23:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:04.655 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:04.655 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.655 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:04.655 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:04.655 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:04.655 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.655 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.655 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.655 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.655 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:20:04.655 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.655 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.913 00:20:04.913 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.913 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.913 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.171 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.171 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.171 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.171 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.171 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.171 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.171 { 00:20:05.171 "cntlid": 107, 00:20:05.171 "qid": 0, 00:20:05.171 "state": "enabled", 00:20:05.171 "thread": "nvmf_tgt_poll_group_000", 00:20:05.171 "listen_address": { 00:20:05.171 "trtype": "TCP", 00:20:05.171 "adrfam": "IPv4", 00:20:05.171 "traddr": "10.0.0.2", 00:20:05.171 "trsvcid": "4420" 00:20:05.171 }, 00:20:05.171 "peer_address": { 00:20:05.171 "trtype": "TCP", 00:20:05.171 "adrfam": "IPv4", 00:20:05.171 "traddr": "10.0.0.1", 00:20:05.171 "trsvcid": "47520" 00:20:05.171 }, 00:20:05.171 "auth": { 00:20:05.171 "state": "completed", 00:20:05.171 "digest": "sha512", 00:20:05.171 "dhgroup": "ffdhe2048" 00:20:05.171 } 00:20:05.171 } 00:20:05.171 ]' 00:20:05.171 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.171 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:05.171 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.171 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:05.171 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.429 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.429 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.429 23:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.687 23:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmM0YzQ1OTVjZmQyNGIxZjczNzQxMDQwMDU3ZDBmYTDiTEuE: --dhchap-ctrl-secret DHHC-1:02:YTBjNDM0YTUyOGRkMTdmNWE0N2VkNmJlNzcyZGNiNTg1ZTIxOGQ2MTZlNTg1MTAzl9e2zQ==: 00:20:06.621 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.621 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.621 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.621 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.621 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.621 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.621 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:06.621 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:06.879 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:20:06.879 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.879 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:06.879 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:06.879 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:06.879 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.879 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.879 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.879 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.879 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.879 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:06.879 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.138 00:20:07.138 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.138 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.138 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.396 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.396 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.396 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.396 23:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.396 23:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.396 23:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.396 { 00:20:07.396 "cntlid": 109, 00:20:07.396 "qid": 0, 00:20:07.396 "state": "enabled", 00:20:07.396 "thread": "nvmf_tgt_poll_group_000", 00:20:07.396 "listen_address": { 00:20:07.396 "trtype": "TCP", 00:20:07.396 "adrfam": "IPv4", 00:20:07.396 "traddr": "10.0.0.2", 00:20:07.396 "trsvcid": "4420" 00:20:07.396 }, 00:20:07.396 "peer_address": { 00:20:07.396 "trtype": "TCP", 00:20:07.396 "adrfam": "IPv4", 00:20:07.396 "traddr": "10.0.0.1", 00:20:07.396 "trsvcid": "47556" 00:20:07.396 }, 00:20:07.396 "auth": { 00:20:07.396 "state": "completed", 00:20:07.396 "digest": "sha512", 00:20:07.396 "dhgroup": "ffdhe2048" 00:20:07.396 } 00:20:07.396 } 00:20:07.396 ]' 00:20:07.396 23:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.396 23:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:07.396 23:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.396 23:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:07.396 23:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.654 23:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.654 23:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.654 23:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.654 23:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MjVkZjZhMTc0NDY2ZjAxMTczYzM5ZGYzMGQ1ZmZhYThlY2VmOTFmZjc5NTI5ZjNlZdE4Ag==: --dhchap-ctrl-secret DHHC-1:01:M2MwNjMzMGIzYzdlZTQ0ZDgzZDE1M2JiNTZkZTA2ZTT5I6h5: 00:20:09.065 23:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.065 23:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.065 23:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.065 23:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.065 23:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.065 23:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.065 23:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:09.065 23:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:09.065 23:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:20:09.065 23:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.065 23:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:09.065 23:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:09.065 23:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:09.065 23:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.065 23:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:09.065 23:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.065 23:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.065 23:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.065 23:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.065 23:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.323 00:20:09.323 23:26:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.323 23:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.323 23:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.581 23:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.581 23:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.581 23:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.581 23:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.581 23:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.581 23:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.581 { 00:20:09.581 "cntlid": 111, 00:20:09.581 "qid": 0, 00:20:09.581 "state": "enabled", 00:20:09.581 "thread": "nvmf_tgt_poll_group_000", 00:20:09.581 "listen_address": { 00:20:09.581 "trtype": "TCP", 00:20:09.581 "adrfam": "IPv4", 00:20:09.581 "traddr": "10.0.0.2", 00:20:09.581 "trsvcid": "4420" 00:20:09.581 }, 00:20:09.581 "peer_address": { 00:20:09.581 "trtype": "TCP", 00:20:09.581 "adrfam": "IPv4", 00:20:09.581 "traddr": "10.0.0.1", 00:20:09.581 "trsvcid": "47580" 00:20:09.581 }, 00:20:09.581 "auth": { 00:20:09.581 "state": "completed", 00:20:09.581 "digest": "sha512", 00:20:09.581 "dhgroup": "ffdhe2048" 00:20:09.581 } 00:20:09.581 } 00:20:09.581 ]' 00:20:09.581 23:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.581 23:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:09.581 23:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.581 23:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:09.581 23:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.839 23:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.839 23:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.839 23:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.097 23:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzJlMDZkNjg2ZGI4NzE0MGRmZjhmMjliZGVkODM4ZjAyOTExMzk3MTZiOTk4MDQ3YzhkNWU4NTE4YWVjMmNmNsBHm6I=: 00:20:11.031 23:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.031 23:26:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.031 23:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.031 23:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.031 23:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.031 23:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:11.031 23:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.031 23:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:11.031 23:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:11.289 23:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:20:11.289 23:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.289 23:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:11.289 23:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:11.289 23:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:11.289 23:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.289 23:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.289 23:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.289 23:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.289 23:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.289 23:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.289 23:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.547 00:20:11.547 23:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:11.547 23:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:11.548 23:26:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.806 23:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.806 23:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.806 23:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.806 23:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.806 23:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.806 23:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.806 { 00:20:11.806 "cntlid": 113, 00:20:11.806 "qid": 0, 00:20:11.806 "state": "enabled", 00:20:11.806 "thread": "nvmf_tgt_poll_group_000", 00:20:11.806 "listen_address": { 00:20:11.806 "trtype": "TCP", 00:20:11.806 "adrfam": "IPv4", 00:20:11.806 "traddr": "10.0.0.2", 00:20:11.806 "trsvcid": "4420" 00:20:11.806 }, 00:20:11.806 "peer_address": { 00:20:11.806 "trtype": "TCP", 00:20:11.806 "adrfam": "IPv4", 00:20:11.806 "traddr": "10.0.0.1", 00:20:11.806 "trsvcid": "47604" 00:20:11.806 }, 00:20:11.806 "auth": { 00:20:11.806 "state": "completed", 00:20:11.806 "digest": "sha512", 00:20:11.806 "dhgroup": "ffdhe3072" 00:20:11.806 } 00:20:11.806 } 00:20:11.806 ]' 00:20:11.806 23:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.806 23:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:11.806 23:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.806 23:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:11.806 23:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.806 23:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.806 23:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.806 23:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.064 23:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzBmMTk1MzcwNDJkNzQ3NmUzYmNiN2M1ZDYzMmEzYmVlZTFhNmRmMGJiY2U0ZmY5LIqEHw==: --dhchap-ctrl-secret DHHC-1:03:Yzk1Y2U4MGQxMzA0NDk3NjYwNDZkZjQ3ZDlmYTEwNTUwMzUzOGU2ZjQyZmUwZGEzMjZmNThhMzNhM2VhMjUwMvkt5lk=: 00:20:12.999 23:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.999 23:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.999 23:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.999 23:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.999 23:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.999 23:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.999 23:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:12.999 23:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:13.257 23:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:20:13.257 23:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.257 23:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:13.257 23:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:13.257 23:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:13.257 23:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.257 23:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.257 23:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.257 23:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.257 23:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.257 23:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.257 23:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.823 00:20:13.823 23:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.823 23:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.823 23:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.081 23:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:20:14.081 23:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.081 23:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.081 23:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.081 23:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.081 23:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.081 { 00:20:14.081 "cntlid": 115, 00:20:14.081 "qid": 0, 00:20:14.081 "state": "enabled", 00:20:14.081 "thread": "nvmf_tgt_poll_group_000", 00:20:14.081 "listen_address": { 00:20:14.081 "trtype": "TCP", 00:20:14.081 "adrfam": "IPv4", 00:20:14.081 "traddr": "10.0.0.2", 00:20:14.081 "trsvcid": "4420" 00:20:14.081 }, 00:20:14.081 "peer_address": { 00:20:14.081 "trtype": "TCP", 00:20:14.081 "adrfam": "IPv4", 00:20:14.081 "traddr": "10.0.0.1", 00:20:14.081 "trsvcid": "53908" 00:20:14.081 }, 00:20:14.081 "auth": { 00:20:14.081 "state": "completed", 00:20:14.081 "digest": "sha512", 00:20:14.081 "dhgroup": "ffdhe3072" 00:20:14.081 } 00:20:14.081 } 00:20:14.081 ]' 00:20:14.081 23:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.081 23:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:14.081 23:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.081 23:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:14.081 23:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.081 23:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.081 23:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.081 23:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.339 23:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmM0YzQ1OTVjZmQyNGIxZjczNzQxMDQwMDU3ZDBmYTDiTEuE: --dhchap-ctrl-secret DHHC-1:02:YTBjNDM0YTUyOGRkMTdmNWE0N2VkNmJlNzcyZGNiNTg1ZTIxOGQ2MTZlNTg1MTAzl9e2zQ==: 00:20:15.273 23:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.273 23:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.273 23:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.273 23:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.273 23:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.273 23:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.273 23:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:15.273 23:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:15.531 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:20:15.531 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.531 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:15.531 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:15.531 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:15.531 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.531 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.531 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.531 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.531 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.531 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.531 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.790 00:20:16.047 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.047 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.047 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.304 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.304 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.304 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.304 23:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.304 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.304 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.304 { 00:20:16.304 "cntlid": 117, 00:20:16.304 "qid": 0, 00:20:16.304 "state": "enabled", 00:20:16.304 "thread": "nvmf_tgt_poll_group_000", 00:20:16.304 "listen_address": { 00:20:16.304 "trtype": "TCP", 00:20:16.304 "adrfam": "IPv4", 00:20:16.304 "traddr": "10.0.0.2", 00:20:16.304 "trsvcid": "4420" 00:20:16.304 }, 00:20:16.304 "peer_address": { 00:20:16.304 "trtype": "TCP", 00:20:16.304 "adrfam": "IPv4", 00:20:16.304 "traddr": "10.0.0.1", 00:20:16.304 "trsvcid": "53930" 00:20:16.304 }, 00:20:16.304 "auth": { 00:20:16.304 "state": "completed", 00:20:16.304 "digest": "sha512", 00:20:16.304 "dhgroup": "ffdhe3072" 00:20:16.304 } 00:20:16.304 } 00:20:16.304 ]' 00:20:16.304 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.304 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:16.304 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.304 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:16.304 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.304 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.304 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.304 23:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.561 23:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MjVkZjZhMTc0NDY2ZjAxMTczYzM5ZGYzMGQ1ZmZhYThlY2VmOTFmZjc5NTI5ZjNlZdE4Ag==: --dhchap-ctrl-secret DHHC-1:01:M2MwNjMzMGIzYzdlZTQ0ZDgzZDE1M2JiNTZkZTA2ZTT5I6h5: 00:20:17.494 23:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.494 23:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:17.494 23:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.494 23:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.494 23:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.494 23:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.494 23:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:20:17.494 23:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:17.751 23:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:20:17.751 23:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.751 23:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:17.751 23:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:17.751 23:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:17.751 23:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.751 23:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:17.751 23:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.751 23:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.751 23:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.751 23:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:17.751 23:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:18.315 00:20:18.315 23:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.315 23:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.315 23:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.315 23:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.315 23:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.315 23:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.315 23:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.315 23:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.315 23:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.315 { 00:20:18.315 "cntlid": 119, 00:20:18.315 "qid": 0, 00:20:18.315 "state": "enabled", 00:20:18.315 "thread": 
"nvmf_tgt_poll_group_000", 00:20:18.315 "listen_address": { 00:20:18.315 "trtype": "TCP", 00:20:18.315 "adrfam": "IPv4", 00:20:18.315 "traddr": "10.0.0.2", 00:20:18.315 "trsvcid": "4420" 00:20:18.315 }, 00:20:18.315 "peer_address": { 00:20:18.315 "trtype": "TCP", 00:20:18.315 "adrfam": "IPv4", 00:20:18.315 "traddr": "10.0.0.1", 00:20:18.315 "trsvcid": "53968" 00:20:18.315 }, 00:20:18.315 "auth": { 00:20:18.315 "state": "completed", 00:20:18.315 "digest": "sha512", 00:20:18.315 "dhgroup": "ffdhe3072" 00:20:18.315 } 00:20:18.315 } 00:20:18.315 ]' 00:20:18.315 23:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.573 23:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:18.573 23:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.573 23:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:18.573 23:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.573 23:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.573 23:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.573 23:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.830 23:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzJlMDZkNjg2ZGI4NzE0MGRmZjhmMjliZGVkODM4ZjAyOTExMzk3MTZiOTk4MDQ3YzhkNWU4NTE4YWVjMmNmNsBHm6I=: 00:20:19.763 23:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.763 23:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.763 23:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.763 23:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.763 23:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.763 23:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:19.763 23:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.763 23:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:19.763 23:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:20.020 23:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:20:20.020 23:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.020 23:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:20.020 23:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:20.020 23:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:20.020 23:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.020 23:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.020 23:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.020 23:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.020 23:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.021 23:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.021 23:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.277 00:20:20.277 23:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.277 23:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.277 23:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.534 23:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.534 23:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.535 23:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.535 23:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.535 23:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.535 23:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.535 { 00:20:20.535 "cntlid": 121, 00:20:20.535 "qid": 0, 00:20:20.535 "state": "enabled", 00:20:20.535 "thread": "nvmf_tgt_poll_group_000", 00:20:20.535 "listen_address": { 00:20:20.535 "trtype": "TCP", 00:20:20.535 "adrfam": "IPv4", 00:20:20.535 "traddr": "10.0.0.2", 00:20:20.535 "trsvcid": "4420" 00:20:20.535 }, 00:20:20.535 "peer_address": { 00:20:20.535 "trtype": "TCP", 00:20:20.535 "adrfam": 
"IPv4", 00:20:20.535 "traddr": "10.0.0.1", 00:20:20.535 "trsvcid": "54008" 00:20:20.535 }, 00:20:20.535 "auth": { 00:20:20.535 "state": "completed", 00:20:20.535 "digest": "sha512", 00:20:20.535 "dhgroup": "ffdhe4096" 00:20:20.535 } 00:20:20.535 } 00:20:20.535 ]' 00:20:20.535 23:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:20.535 23:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:20.792 23:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:20.792 23:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:20.792 23:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.792 23:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.792 23:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.792 23:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.050 23:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzBmMTk1MzcwNDJkNzQ3NmUzYmNiN2M1ZDYzMmEzYmVlZTFhNmRmMGJiY2U0ZmY5LIqEHw==: --dhchap-ctrl-secret DHHC-1:03:Yzk1Y2U4MGQxMzA0NDk3NjYwNDZkZjQ3ZDlmYTEwNTUwMzUzOGU2ZjQyZmUwZGEzMjZmNThhMzNhM2VhMjUwMvkt5lk=: 00:20:21.982 23:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.982 23:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.982 23:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.982 23:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.982 23:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.982 23:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:21.982 23:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:21.982 23:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:22.239 23:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:20:22.239 23:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.239 23:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:22.239 
23:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:22.239 23:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:22.239 23:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.239 23:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.239 23:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.239 23:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.239 23:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.239 23:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.239 23:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.497 00:20:22.497 23:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:22.497 23:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:22.497 23:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.755 23:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.755 23:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.755 23:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.755 23:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.013 23:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.013 23:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.013 { 00:20:23.013 "cntlid": 123, 00:20:23.013 "qid": 0, 00:20:23.013 "state": "enabled", 00:20:23.013 "thread": "nvmf_tgt_poll_group_000", 00:20:23.013 "listen_address": { 00:20:23.013 "trtype": "TCP", 00:20:23.013 "adrfam": "IPv4", 00:20:23.013 "traddr": "10.0.0.2", 00:20:23.013 "trsvcid": "4420" 00:20:23.013 }, 00:20:23.013 "peer_address": { 00:20:23.013 "trtype": "TCP", 00:20:23.013 "adrfam": "IPv4", 00:20:23.013 "traddr": "10.0.0.1", 00:20:23.013 "trsvcid": "58096" 00:20:23.013 }, 00:20:23.013 "auth": { 00:20:23.013 "state": "completed", 00:20:23.013 "digest": "sha512", 00:20:23.013 "dhgroup": "ffdhe4096" 00:20:23.013 } 00:20:23.013 } 00:20:23.013 ]' 00:20:23.013 23:26:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.013 23:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:23.013 23:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.013 23:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:23.013 23:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.013 23:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.013 23:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.013 23:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.271 23:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmM0YzQ1OTVjZmQyNGIxZjczNzQxMDQwMDU3ZDBmYTDiTEuE: --dhchap-ctrl-secret DHHC-1:02:YTBjNDM0YTUyOGRkMTdmNWE0N2VkNmJlNzcyZGNiNTg1ZTIxOGQ2MTZlNTg1MTAzl9e2zQ==: 00:20:24.204 23:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.204 23:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.204 23:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.204 23:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.204 23:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.204 23:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.204 23:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:24.204 23:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:24.462 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:20:24.462 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.462 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:24.462 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:24.462 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:24.462 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:20:24.462 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.462 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.462 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.462 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.462 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.462 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.720 00:20:24.720 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:24.720 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:24.720 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.321 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.321 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.321 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.321 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.321 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.321 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.321 { 00:20:25.321 "cntlid": 125, 00:20:25.321 "qid": 0, 00:20:25.321 "state": "enabled", 00:20:25.321 "thread": "nvmf_tgt_poll_group_000", 00:20:25.321 "listen_address": { 00:20:25.321 "trtype": "TCP", 00:20:25.321 "adrfam": "IPv4", 00:20:25.321 "traddr": "10.0.0.2", 00:20:25.321 "trsvcid": "4420" 00:20:25.321 }, 00:20:25.321 "peer_address": { 00:20:25.321 "trtype": "TCP", 00:20:25.321 "adrfam": "IPv4", 00:20:25.321 "traddr": "10.0.0.1", 00:20:25.321 "trsvcid": "58128" 00:20:25.321 }, 00:20:25.321 "auth": { 00:20:25.321 "state": "completed", 00:20:25.321 "digest": "sha512", 00:20:25.321 "dhgroup": "ffdhe4096" 00:20:25.321 } 00:20:25.321 } 00:20:25.321 ]' 00:20:25.321 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.321 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:25.321 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.321 
23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:25.321 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.321 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.321 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.321 23:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.579 23:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MjVkZjZhMTc0NDY2ZjAxMTczYzM5ZGYzMGQ1ZmZhYThlY2VmOTFmZjc5NTI5ZjNlZdE4Ag==: --dhchap-ctrl-secret DHHC-1:01:M2MwNjMzMGIzYzdlZTQ0ZDgzZDE1M2JiNTZkZTA2ZTT5I6h5: 00:20:26.510 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.510 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.510 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.510 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.510 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.510 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.510 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:26.510 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:26.768 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:20:26.768 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.768 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:26.768 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:26.768 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:26.768 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.768 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:26.768 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:26.768 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.768 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.769 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.769 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:27.026 00:20:27.026 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.026 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.026 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.284 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.284 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.284 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.284 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.284 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.284 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.284 { 00:20:27.284 "cntlid": 127, 00:20:27.284 "qid": 0, 00:20:27.284 "state": "enabled", 00:20:27.284 "thread": "nvmf_tgt_poll_group_000", 00:20:27.284 "listen_address": { 00:20:27.284 "trtype": "TCP", 00:20:27.284 "adrfam": "IPv4", 00:20:27.284 "traddr": "10.0.0.2", 00:20:27.284 "trsvcid": "4420" 00:20:27.284 }, 00:20:27.284 "peer_address": { 00:20:27.284 "trtype": "TCP", 00:20:27.284 "adrfam": "IPv4", 00:20:27.284 "traddr": "10.0.0.1", 00:20:27.284 "trsvcid": "58164" 00:20:27.284 }, 00:20:27.284 "auth": { 00:20:27.284 "state": "completed", 00:20:27.284 "digest": "sha512", 00:20:27.284 "dhgroup": "ffdhe4096" 00:20:27.284 } 00:20:27.284 } 00:20:27.284 ]' 00:20:27.284 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.284 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:27.284 23:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.542 23:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:27.542 23:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.542 23:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.542 23:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.542 23:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.800 23:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzJlMDZkNjg2ZGI4NzE0MGRmZjhmMjliZGVkODM4ZjAyOTExMzk3MTZiOTk4MDQ3YzhkNWU4NTE4YWVjMmNmNsBHm6I=: 00:20:28.732 23:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.732 23:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.732 23:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.732 23:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.732 23:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.732 23:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.732 23:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.732 23:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:28.732 23:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:28.989 23:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:20:28.989 23:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:28.989 23:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:28.989 23:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:28.989 23:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:28.989 23:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.989 23:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.989 23:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.989 23:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.989 23:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.990 23:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.990 23:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.553 00:20:29.553 23:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.553 23:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.553 23:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.810 23:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.810 23:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.810 23:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.810 23:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.810 23:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.810 23:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.810 { 00:20:29.810 "cntlid": 129, 00:20:29.810 "qid": 0, 00:20:29.810 "state": "enabled", 00:20:29.810 "thread": "nvmf_tgt_poll_group_000", 00:20:29.810 "listen_address": { 00:20:29.810 "trtype": "TCP", 00:20:29.810 "adrfam": "IPv4", 00:20:29.810 "traddr": "10.0.0.2", 00:20:29.810 "trsvcid": "4420" 00:20:29.810 }, 00:20:29.810 "peer_address": { 00:20:29.810 "trtype": "TCP", 00:20:29.810 "adrfam": "IPv4", 00:20:29.810 "traddr": "10.0.0.1", 00:20:29.810 "trsvcid": "58208" 00:20:29.810 }, 00:20:29.810 "auth": { 00:20:29.810 "state": "completed", 00:20:29.810 "digest": "sha512", 00:20:29.810 "dhgroup": "ffdhe6144" 00:20:29.810 } 00:20:29.810 } 00:20:29.810 ]' 00:20:29.811 23:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.811 23:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:29.811 23:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.811 23:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:29.811 23:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.811 23:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.811 23:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.811 23:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.068 
23:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzBmMTk1MzcwNDJkNzQ3NmUzYmNiN2M1ZDYzMmEzYmVlZTFhNmRmMGJiY2U0ZmY5LIqEHw==: --dhchap-ctrl-secret DHHC-1:03:Yzk1Y2U4MGQxMzA0NDk3NjYwNDZkZjQ3ZDlmYTEwNTUwMzUzOGU2ZjQyZmUwZGEzMjZmNThhMzNhM2VhMjUwMvkt5lk=: 00:20:31.000 23:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.000 23:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.000 23:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.000 23:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.258 23:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.258 23:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.258 23:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:31.258 23:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:31.515 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:20:31.515 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.515 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:31.515 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:31.515 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:31.515 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.515 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.515 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.515 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.515 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.515 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.515 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.080 00:20:32.080 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.080 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.080 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.339 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.339 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.339 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.339 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.339 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.339 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.339 { 00:20:32.339 "cntlid": 131, 00:20:32.339 "qid": 0, 00:20:32.339 "state": "enabled", 00:20:32.339 "thread": "nvmf_tgt_poll_group_000", 00:20:32.339 "listen_address": { 00:20:32.339 "trtype": "TCP", 00:20:32.339 "adrfam": "IPv4", 00:20:32.339 "traddr": "10.0.0.2", 00:20:32.339 "trsvcid": "4420" 00:20:32.339 }, 00:20:32.339 "peer_address": { 00:20:32.339 "trtype": "TCP", 00:20:32.339 "adrfam": "IPv4", 00:20:32.339 "traddr": "10.0.0.1", 00:20:32.339 "trsvcid": "57574" 00:20:32.339 }, 00:20:32.339 "auth": { 00:20:32.339 "state": "completed", 00:20:32.339 "digest": "sha512", 00:20:32.339 "dhgroup": "ffdhe6144" 00:20:32.339 } 00:20:32.339 } 00:20:32.339 ]' 00:20:32.339 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.339 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:32.339 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.339 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:32.339 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.339 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.339 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.339 23:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.597 23:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:01:YmM0YzQ1OTVjZmQyNGIxZjczNzQxMDQwMDU3ZDBmYTDiTEuE: --dhchap-ctrl-secret DHHC-1:02:YTBjNDM0YTUyOGRkMTdmNWE0N2VkNmJlNzcyZGNiNTg1ZTIxOGQ2MTZlNTg1MTAzl9e2zQ==: 00:20:33.569 23:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.569 23:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.569 23:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.569 23:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.569 23:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.569 23:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.569 23:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:33.569 23:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:33.827 23:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:20:33.827 23:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.827 23:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:33.828 23:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:33.828 23:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:33.828 23:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.828 23:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.828 23:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.828 23:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.828 23:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.828 23:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.828 23:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.393 
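
For orientation, the check that follows each successful attach in this log reads back the first qpair's auth descriptor and asserts the negotiated digest, DH group, and state. A minimal sketch of that verification step, assuming jq is available and using the rpc.py path and subsystem NQN copied from the surrounding entries (not a verbatim excerpt of target/auth.sh):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Fetch the qpair list for the subsystem under test
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    # Assert the auth parameters negotiated on the first qpair
    [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe6144 ]]
    [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]

The jq filters match the ones the xtrace shows at target/auth.sh@46-48; only the variable names are illustrative.
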
00:20:34.393 23:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.393 23:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.393 23:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.651 23:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.651 23:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.651 23:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.651 23:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.651 23:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.651 23:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.651 { 00:20:34.651 "cntlid": 133, 00:20:34.651 "qid": 0, 00:20:34.651 "state": "enabled", 00:20:34.651 "thread": "nvmf_tgt_poll_group_000", 00:20:34.651 "listen_address": { 00:20:34.651 "trtype": "TCP", 00:20:34.651 "adrfam": "IPv4", 00:20:34.651 "traddr": "10.0.0.2", 00:20:34.651 "trsvcid": "4420" 00:20:34.651 }, 00:20:34.651 "peer_address": { 00:20:34.651 "trtype": "TCP", 00:20:34.651 "adrfam": "IPv4", 00:20:34.651 "traddr": "10.0.0.1", 00:20:34.651 "trsvcid": "57588" 00:20:34.651 }, 00:20:34.651 "auth": { 00:20:34.651 "state": "completed", 00:20:34.651 "digest": "sha512", 00:20:34.651 "dhgroup": "ffdhe6144" 00:20:34.651 } 00:20:34.651 } 00:20:34.651 ]' 00:20:34.651 23:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.908 23:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:34.909 23:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.909 23:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:34.909 23:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.909 23:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.909 23:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.909 23:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.166 23:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MjVkZjZhMTc0NDY2ZjAxMTczYzM5ZGYzMGQ1ZmZhYThlY2VmOTFmZjc5NTI5ZjNlZdE4Ag==: --dhchap-ctrl-secret DHHC-1:01:M2MwNjMzMGIzYzdlZTQ0ZDgzZDE1M2JiNTZkZTA2ZTT5I6h5: 00:20:36.098 23:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.098 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:20:36.098 23:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.098 23:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.098 23:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.098 23:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.098 23:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.098 23:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:36.098 23:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:36.355 23:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:20:36.355 23:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.355 23:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:36.355 23:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:36.355 23:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:36.355 23:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.355 23:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:36.355 23:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.355 23:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.355 23:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.355 23:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.355 23:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.919 00:20:36.919 23:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.919 23:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.919 23:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:37.177 23:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.177 23:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.177 23:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.177 23:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.177 23:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.177 23:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.177 { 00:20:37.177 "cntlid": 135, 00:20:37.177 "qid": 0, 00:20:37.177 "state": "enabled", 00:20:37.177 "thread": "nvmf_tgt_poll_group_000", 00:20:37.177 "listen_address": { 00:20:37.177 "trtype": "TCP", 00:20:37.177 "adrfam": "IPv4", 00:20:37.177 "traddr": "10.0.0.2", 00:20:37.177 "trsvcid": "4420" 00:20:37.177 }, 00:20:37.177 "peer_address": { 00:20:37.177 "trtype": "TCP", 00:20:37.177 "adrfam": "IPv4", 00:20:37.177 "traddr": "10.0.0.1", 00:20:37.177 "trsvcid": "57620" 00:20:37.177 }, 00:20:37.177 "auth": { 00:20:37.177 "state": "completed", 00:20:37.177 "digest": "sha512", 00:20:37.177 "dhgroup": "ffdhe6144" 00:20:37.177 } 00:20:37.177 } 00:20:37.177 ]' 00:20:37.177 23:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.177 23:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:37.177 23:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.177 23:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:37.177 23:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.177 23:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.177 23:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.177 23:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.435 23:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzJlMDZkNjg2ZGI4NzE0MGRmZjhmMjliZGVkODM4ZjAyOTExMzk3MTZiOTk4MDQ3YzhkNWU4NTE4YWVjMmNmNsBHm6I=: 00:20:38.804 23:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.804 23:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.804 23:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.804 23:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:38.804 23:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.804 23:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.804 23:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.804 23:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:38.804 23:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:38.804 23:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:20:38.804 23:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.804 23:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:38.804 23:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:38.804 23:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:38.804 23:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.804 23:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.804 23:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.804 23:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.804 23:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.804 23:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.804 23:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.734 00:20:39.734 23:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.734 23:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.734 23:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.991 23:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.991 23:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
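
Each connect_authenticate round visible here follows the same RPC sequence: restrict the host bdev layer to one digest/dhgroup pair, provision the key on the target subsystem, attach, and confirm the controller came up. A condensed sketch of one round under stated assumptions (the hostrpc helper name mirrors the one the xtrace shows at target/auth.sh@31; NQNs, address, and socket path are copied from the log):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # host-side SPDK instance
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # Host: allow only sha512 + ffdhe8192 for DH-HMAC-CHAP
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # Target: provision key0 (and the bidirectional controller key) for this host
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
           --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Host: attach and authenticate
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Confirm the controller exists before inspecting qpairs
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
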
00:20:39.991 23:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.991 23:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.991 23:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.991 23:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.991 { 00:20:39.991 "cntlid": 137, 00:20:39.991 "qid": 0, 00:20:39.991 "state": "enabled", 00:20:39.991 "thread": "nvmf_tgt_poll_group_000", 00:20:39.991 "listen_address": { 00:20:39.991 "trtype": "TCP", 00:20:39.991 "adrfam": "IPv4", 00:20:39.991 "traddr": "10.0.0.2", 00:20:39.991 "trsvcid": "4420" 00:20:39.991 }, 00:20:39.991 "peer_address": { 00:20:39.991 "trtype": "TCP", 00:20:39.991 "adrfam": "IPv4", 00:20:39.991 "traddr": "10.0.0.1", 00:20:39.991 "trsvcid": "57656" 00:20:39.991 }, 00:20:39.991 "auth": { 00:20:39.991 "state": "completed", 00:20:39.991 "digest": "sha512", 00:20:39.991 "dhgroup": "ffdhe8192" 00:20:39.991 } 00:20:39.991 } 00:20:39.991 ]' 00:20:39.991 23:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.991 23:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.991 23:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.991 23:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:39.991 23:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.991 23:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.991 23:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.991 23:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.555 23:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzBmMTk1MzcwNDJkNzQ3NmUzYmNiN2M1ZDYzMmEzYmVlZTFhNmRmMGJiY2U0ZmY5LIqEHw==: --dhchap-ctrl-secret DHHC-1:03:Yzk1Y2U4MGQxMzA0NDk3NjYwNDZkZjQ3ZDlmYTEwNTUwMzUzOGU2ZjQyZmUwZGEzMjZmNThhMzNhM2VhMjUwMvkt5lk=: 00:20:41.490 23:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.490 23:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.490 23:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.490 23:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.490 23:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.490 23:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.490 23:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:41.490 23:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:41.769 23:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:20:41.770 23:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.770 23:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:41.770 23:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:41.770 23:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:41.770 23:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.770 23:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.770 23:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.770 23:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.770 23:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.770 23:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.770 23:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.707 00:20:42.707 23:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.707 23:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.707 23:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.707 23:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.707 23:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.707 23:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.707 23:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.707 23:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.707 23:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.707 { 00:20:42.707 "cntlid": 139, 00:20:42.707 "qid": 0, 00:20:42.707 "state": "enabled", 00:20:42.707 "thread": "nvmf_tgt_poll_group_000", 00:20:42.707 "listen_address": { 00:20:42.707 "trtype": "TCP", 00:20:42.707 "adrfam": "IPv4", 00:20:42.707 "traddr": "10.0.0.2", 00:20:42.707 "trsvcid": "4420" 00:20:42.707 }, 00:20:42.707 "peer_address": { 00:20:42.707 "trtype": "TCP", 00:20:42.707 "adrfam": "IPv4", 00:20:42.707 "traddr": "10.0.0.1", 00:20:42.707 "trsvcid": "36812" 00:20:42.707 }, 00:20:42.707 "auth": { 00:20:42.707 "state": "completed", 00:20:42.707 "digest": "sha512", 00:20:42.707 "dhgroup": "ffdhe8192" 00:20:42.707 } 00:20:42.707 } 00:20:42.707 ]' 00:20:42.707 23:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.964 23:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:42.964 23:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.964 23:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:42.964 23:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.964 23:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.964 23:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.964 23:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.222 23:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YmM0YzQ1OTVjZmQyNGIxZjczNzQxMDQwMDU3ZDBmYTDiTEuE: --dhchap-ctrl-secret DHHC-1:02:YTBjNDM0YTUyOGRkMTdmNWE0N2VkNmJlNzcyZGNiNTg1ZTIxOGQ2MTZlNTg1MTAzl9e2zQ==: 00:20:44.155 23:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.155 23:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.155 23:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.155 23:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.155 23:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.155 23:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.155 23:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:44.155 23:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:44.413 23:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:20:44.413 23:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.413 23:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:44.413 23:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:44.413 23:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:44.413 23:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.413 23:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.413 23:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.413 23:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.413 23:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.413 23:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.413 23:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.347 00:20:45.347 23:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.347 23:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.347 23:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.605 23:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.605 23:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.605 23:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.605 23:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.605 23:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.605 23:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.605 { 00:20:45.605 "cntlid": 141, 00:20:45.605 "qid": 0, 00:20:45.605 "state": "enabled", 00:20:45.605 "thread": "nvmf_tgt_poll_group_000", 00:20:45.605 "listen_address": 
{ 00:20:45.605 "trtype": "TCP", 00:20:45.605 "adrfam": "IPv4", 00:20:45.605 "traddr": "10.0.0.2", 00:20:45.605 "trsvcid": "4420" 00:20:45.605 }, 00:20:45.605 "peer_address": { 00:20:45.605 "trtype": "TCP", 00:20:45.605 "adrfam": "IPv4", 00:20:45.605 "traddr": "10.0.0.1", 00:20:45.605 "trsvcid": "36846" 00:20:45.605 }, 00:20:45.605 "auth": { 00:20:45.605 "state": "completed", 00:20:45.605 "digest": "sha512", 00:20:45.605 "dhgroup": "ffdhe8192" 00:20:45.605 } 00:20:45.605 } 00:20:45.605 ]' 00:20:45.605 23:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.605 23:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:45.605 23:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.605 23:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:45.605 23:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.605 23:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.605 23:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.606 23:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.862 23:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MjVkZjZhMTc0NDY2ZjAxMTczYzM5ZGYzMGQ1ZmZhYThlY2VmOTFmZjc5NTI5ZjNlZdE4Ag==: --dhchap-ctrl-secret DHHC-1:01:M2MwNjMzMGIzYzdlZTQ0ZDgzZDE1M2JiNTZkZTA2ZTT5I6h5: 00:20:46.793 23:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.794 23:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.794 23:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.794 23:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.794 23:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.794 23:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.794 23:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:46.794 23:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:47.050 23:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:20:47.050 23:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.050 23:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:47.050 23:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:47.050 23:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:47.050 23:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.050 23:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:47.050 23:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.050 23:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.050 23:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.050 23:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:47.050 23:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:47.982 00:20:47.982 23:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.982 23:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.982 23:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.239 23:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.239 23:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.239 23:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.239 23:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.239 23:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.239 23:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.239 { 00:20:48.239 "cntlid": 143, 00:20:48.239 "qid": 0, 00:20:48.239 "state": "enabled", 00:20:48.239 "thread": "nvmf_tgt_poll_group_000", 00:20:48.239 "listen_address": { 00:20:48.239 "trtype": "TCP", 00:20:48.239 "adrfam": "IPv4", 00:20:48.239 "traddr": "10.0.0.2", 00:20:48.239 "trsvcid": "4420" 00:20:48.239 }, 00:20:48.239 "peer_address": { 00:20:48.239 "trtype": "TCP", 00:20:48.239 "adrfam": "IPv4", 00:20:48.239 "traddr": "10.0.0.1", 00:20:48.239 "trsvcid": "36892" 00:20:48.239 }, 00:20:48.239 "auth": { 00:20:48.239 "state": "completed", 00:20:48.239 "digest": "sha512", 00:20:48.239 "dhgroup": 
"ffdhe8192" 00:20:48.239 } 00:20:48.239 } 00:20:48.239 ]' 00:20:48.239 23:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.239 23:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.239 23:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.239 23:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:48.497 23:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.497 23:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.497 23:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.497 23:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.755 23:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzJlMDZkNjg2ZGI4NzE0MGRmZjhmMjliZGVkODM4ZjAyOTExMzk3MTZiOTk4MDQ3YzhkNWU4NTE4YWVjMmNmNsBHm6I=: 00:20:49.688 23:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.688 23:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.688 23:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.688 23:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.688 23:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.688 23:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:49.688 23:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:20:49.688 23:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:49.688 23:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:49.688 23:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:49.688 23:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:49.946 23:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:20:49.946 23:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.946 23:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:49.946 23:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:49.946 23:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:49.946 23:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.946 23:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.946 23:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.946 23:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.946 23:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.946 23:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.946 23:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.879 00:20:50.879 23:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.879 23:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.879 23:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.879 23:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.879 23:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.879 23:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.879 23:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.879 23:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.879 23:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.879 { 00:20:50.879 "cntlid": 145, 00:20:50.880 "qid": 0, 00:20:50.880 "state": "enabled", 00:20:50.880 "thread": "nvmf_tgt_poll_group_000", 00:20:50.880 "listen_address": { 00:20:50.880 "trtype": "TCP", 00:20:50.880 "adrfam": "IPv4", 00:20:50.880 "traddr": "10.0.0.2", 00:20:50.880 "trsvcid": "4420" 00:20:50.880 }, 00:20:50.880 "peer_address": { 00:20:50.880 "trtype": "TCP", 00:20:50.880 "adrfam": "IPv4", 00:20:50.880 "traddr": "10.0.0.1", 00:20:50.880 "trsvcid": "36930" 00:20:50.880 }, 00:20:50.880 "auth": { 00:20:50.880 
"state": "completed", 00:20:50.880 "digest": "sha512", 00:20:50.880 "dhgroup": "ffdhe8192" 00:20:50.880 } 00:20:50.880 } 00:20:50.880 ]' 00:20:50.880 23:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.137 23:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.137 23:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.137 23:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:51.137 23:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.137 23:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.137 23:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.137 23:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.395 23:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzBmMTk1MzcwNDJkNzQ3NmUzYmNiN2M1ZDYzMmEzYmVlZTFhNmRmMGJiY2U0ZmY5LIqEHw==: --dhchap-ctrl-secret DHHC-1:03:Yzk1Y2U4MGQxMzA0NDk3NjYwNDZkZjQ3ZDlmYTEwNTUwMzUzOGU2ZjQyZmUwZGEzMjZmNThhMzNhM2VhMjUwMvkt5lk=: 00:20:52.328 23:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.328 23:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.328 23:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.328 23:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.328 23:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.329 23:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:20:52.329 23:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.329 23:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.329 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.329 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:52.329 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:52.329 23:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:52.329 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:52.329 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:52.329 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:52.329 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:52.329 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:52.329 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:53.262 request: 00:20:53.262 { 00:20:53.262 "name": "nvme0", 00:20:53.262 "trtype": "tcp", 00:20:53.262 "traddr": "10.0.0.2", 00:20:53.262 "adrfam": "ipv4", 00:20:53.262 "trsvcid": "4420", 00:20:53.262 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:53.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:53.262 "prchk_reftag": false, 00:20:53.262 "prchk_guard": false, 00:20:53.262 "hdgst": false, 00:20:53.262 "ddgst": false, 00:20:53.262 "dhchap_key": "key2", 00:20:53.262 "method": "bdev_nvme_attach_controller", 00:20:53.262 "req_id": 1 00:20:53.262 } 00:20:53.262 Got JSON-RPC error response 00:20:53.262 response: 00:20:53.262 { 00:20:53.262 "code": -5, 00:20:53.262 "message": "Input/output error" 00:20:53.262 } 00:20:53.262 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:53.262 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:53.262 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:53.262 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:53.262 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.262 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.262 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.262 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.262 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.262 
23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.262 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.262 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.262 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:53.262 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:53.262 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:53.262 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:53.262 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:53.262 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:53.262 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:53.262 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:53.263 23:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:54.198 request: 00:20:54.198 { 00:20:54.198 "name": "nvme0", 00:20:54.198 "trtype": "tcp", 00:20:54.198 "traddr": "10.0.0.2", 00:20:54.198 "adrfam": "ipv4", 00:20:54.198 "trsvcid": "4420", 00:20:54.198 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:54.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:54.198 "prchk_reftag": false, 00:20:54.198 "prchk_guard": false, 00:20:54.198 "hdgst": false, 00:20:54.198 "ddgst": false, 00:20:54.198 "dhchap_key": "key1", 00:20:54.198 "dhchap_ctrlr_key": "ckey2", 00:20:54.198 "method": "bdev_nvme_attach_controller", 00:20:54.198 "req_id": 1 00:20:54.198 } 00:20:54.198 Got JSON-RPC error response 00:20:54.198 response: 00:20:54.198 { 00:20:54.198 "code": -5, 00:20:54.198 "message": "Input/output error" 00:20:54.198 } 00:20:54.198 23:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:54.198 23:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:54.198 23:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:54.198 23:26:51 
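The negative cases all follow the pattern just traced: wrap the attach in the suite's NOT helper and treat the JSON-RPC -5 (Input/output error) rejection as the expected outcome. A minimal stand-in for that pattern, with a simplified NOT rather than the full valid_exec_arg machinery shown above:

  # NOT succeeds only when the wrapped command fails, which is what a
  # mismatched DH-CHAP key must do.
  NOT() { ! "$@"; }
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NOT "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
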
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:54.199 23:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.199 23:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.199 23:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.199 23:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.199 23:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:20:54.199 23:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.199 23:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.199 23:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.199 23:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.199 23:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:54.199 23:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.199 23:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:54.199 23:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.199 23:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:54.199 23:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.199 23:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.199 23:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.136 request: 00:20:55.136 { 00:20:55.136 "name": "nvme0", 00:20:55.136 "trtype": "tcp", 00:20:55.136 "traddr": "10.0.0.2", 00:20:55.136 "adrfam": "ipv4", 00:20:55.136 "trsvcid": "4420", 00:20:55.136 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:55.136 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:55.136 "prchk_reftag": false, 00:20:55.136 "prchk_guard": false, 00:20:55.136 "hdgst": false, 00:20:55.136 "ddgst": false, 00:20:55.136 "dhchap_key": "key1", 00:20:55.136 "dhchap_ctrlr_key": "ckey1", 00:20:55.136 "method": "bdev_nvme_attach_controller", 00:20:55.136 "req_id": 1 00:20:55.136 } 00:20:55.136 Got JSON-RPC error response 00:20:55.136 response: 00:20:55.136 { 00:20:55.136 "code": -5, 00:20:55.136 "message": "Input/output error" 00:20:55.136 } 00:20:55.136 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:55.136 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:55.136 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:55.136 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:55.136 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.136 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.136 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.136 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.136 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1387196 00:20:55.136 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1387196 ']' 00:20:55.136 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1387196 00:20:55.136 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:55.136 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:55.136 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1387196 00:20:55.136 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:55.137 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:55.137 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1387196' 00:20:55.137 killing process with pid 1387196 00:20:55.137 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1387196 00:20:55.137 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1387196 00:20:55.396 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:55.396 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:55.396 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:55.396 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.396 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=1409711 00:20:55.396 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:55.396 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1409711 00:20:55.396 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1409711 ']' 00:20:55.396 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.396 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:55.396 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.396 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:55.396 23:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.654 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:55.654 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:55.654 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:55.654 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:55.654 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.654 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:55.654 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:55.654 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1409711 00:20:55.654 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1409711 ']' 00:20:55.654 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.654 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:55.654 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
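For the second phase the target is relaunched with nvmf_auth debug logging and --wait-for-rpc, so the subsystem state can be rebuilt over RPC before initialization completes. A reduced sketch of that relaunch, assuming the same netns and binary path as this run and polling the socket the way waitforlisten does:

  # Start nvmf_tgt in the test namespace, gated at the RPC-wait state.
  BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  ip netns exec cvl_0_0_ns_spdk "$BIN" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  # Poll until the app answers on its RPC socket.
  until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do sleep 0.5; done
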
00:20:55.654 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:55.654 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.912 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:55.912 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:55.912 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:55.912 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.912 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.912 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.912 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:55.912 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.912 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:55.912 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:55.912 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:55.912 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.912 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:55.912 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.912 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.912 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.912 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:55.912 23:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:56.848 00:20:56.848 23:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.848 23:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.848 23:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.106 23:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.106 23:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
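Note the ckey expansion in the trace: ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) builds an empty array when no controller key exists for the slot, which is why the key3 add_host above carries no --dhchap-ctrlr-key. A small illustration of that bash idiom with hypothetical slot contents:

  # ${var:+...} expands to the alternate words only when var is set and
  # non-empty, so an empty slot contributes no arguments at all.
  ckeys=("c0-secret" "" "" "")   # hypothetical: only slot 0 has a ctrlr key
  for k in 0 3; do
      ckey=(${ckeys[k]:+--dhchap-ctrlr-key "ckey$k"})
      echo "slot $k -> ${ckey[*]:-<no ctrlr-key args>}"
  done
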
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.106 23:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.106 23:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.106 23:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.106 23:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.106 { 00:20:57.106 "cntlid": 1, 00:20:57.106 "qid": 0, 00:20:57.106 "state": "enabled", 00:20:57.106 "thread": "nvmf_tgt_poll_group_000", 00:20:57.106 "listen_address": { 00:20:57.106 "trtype": "TCP", 00:20:57.106 "adrfam": "IPv4", 00:20:57.106 "traddr": "10.0.0.2", 00:20:57.106 "trsvcid": "4420" 00:20:57.106 }, 00:20:57.106 "peer_address": { 00:20:57.106 "trtype": "TCP", 00:20:57.106 "adrfam": "IPv4", 00:20:57.106 "traddr": "10.0.0.1", 00:20:57.106 "trsvcid": "53368" 00:20:57.106 }, 00:20:57.106 "auth": { 00:20:57.106 "state": "completed", 00:20:57.106 "digest": "sha512", 00:20:57.106 "dhgroup": "ffdhe8192" 00:20:57.106 } 00:20:57.106 } 00:20:57.106 ]' 00:20:57.107 23:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.107 23:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:57.107 23:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.107 23:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:57.107 23:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.107 23:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.107 23:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.107 23:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.365 23:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzJlMDZkNjg2ZGI4NzE0MGRmZjhmMjliZGVkODM4ZjAyOTExMzk3MTZiOTk4MDQ3YzhkNWU4NTE4YWVjMmNmNsBHm6I=: 00:20:58.299 23:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.299 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.299 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.299 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.299 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.299 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
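The raw secret handed to nvme-cli above uses the standard NVMe-oF representation, DHHC-1:<nn>:<base64>:, where the two-digit field records how the secret was transformed (00 as-is; 01, 02, 03 for sha256, sha384, sha512) and the base64 payload carries the key material plus a trailing CRC-32 check value. Reconnecting with the sha512-transformed key3 secret from this run:

  # nvme-cli passes the DHHC-1 blob through to the kernel, which checks
  # the embedded CRC before using the key for DH-CHAP.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret "DHHC-1:03:MzJlMDZkNjg2ZGI4NzE0MGRmZjhmMjliZGVkODM4ZjAyOTExMzk3MTZiOTk4MDQ3YzhkNWU4NTE4YWVjMmNmNsBHm6I=:"
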
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:58.299 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.299 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.299 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.299 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:58.299 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:58.556 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.556 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:58.556 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.556 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:58.556 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:58.556 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:58.556 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:58.556 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.556 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.818 request: 00:20:58.818 { 00:20:58.818 "name": "nvme0", 00:20:58.818 "trtype": "tcp", 00:20:58.818 "traddr": "10.0.0.2", 00:20:58.818 "adrfam": "ipv4", 00:20:58.818 "trsvcid": "4420", 00:20:58.818 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:58.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:58.818 "prchk_reftag": false, 00:20:58.818 "prchk_guard": false, 00:20:58.818 "hdgst": false, 00:20:58.818 "ddgst": false, 00:20:58.818 "dhchap_key": "key3", 00:20:58.818 "method": "bdev_nvme_attach_controller", 00:20:58.818 "req_id": 1 00:20:58.818 } 00:20:58.818 Got JSON-RPC error response 00:20:58.818 response: 00:20:58.818 { 00:20:58.818 "code": -5, 00:20:58.818 "message": "Input/output error" 00:20:58.818 } 00:20:58.818 23:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:58.818 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:58.818 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:58.818 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:58.818 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:58.818 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:58.818 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:58.818 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:59.123 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:59.123 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:59.123 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:59.123 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:59.123 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:59.123 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:59.123 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:59.123 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:59.123 23:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:59.381 request: 00:20:59.381 { 00:20:59.381 "name": "nvme0", 00:20:59.381 "trtype": "tcp", 00:20:59.381 "traddr": "10.0.0.2", 00:20:59.381 "adrfam": "ipv4", 00:20:59.381 "trsvcid": "4420", 00:20:59.381 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:59.381 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:59.381 "prchk_reftag": false, 00:20:59.381 "prchk_guard": false, 00:20:59.381 "hdgst": false, 00:20:59.381 "ddgst": false, 00:20:59.381 "dhchap_key": "key3", 00:20:59.381 
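Parameter mismatches are exercised the same way as bad keys: the host is first limited to sha256 only, then to ffdhe2048 only, and in each case the key3 attach is expected to bounce with the same -5 error. A sketch of the first restriction, reusing the simplified NOT stand-in from earlier:

  NOT() { ! "$@"; }   # simplified stand-in for the suite's NOT helper
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Narrow the host to a single digest...
  "$RPC" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
  # ...and expect the key3 association to fail negotiation.
  NOT "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
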
"method": "bdev_nvme_attach_controller", 00:20:59.381 "req_id": 1 00:20:59.381 } 00:20:59.381 Got JSON-RPC error response 00:20:59.381 response: 00:20:59.381 { 00:20:59.381 "code": -5, 00:20:59.381 "message": "Input/output error" 00:20:59.381 } 00:20:59.381 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:59.381 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:59.381 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:59.381 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:59.381 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:59.381 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:59.381 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:59.381 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:59.381 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:59.381 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:59.638 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.638 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.638 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.638 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.638 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.638 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.638 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.638 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.638 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:59.638 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:59.639 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:59.639 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:59.639 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:59.639 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:59.639 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:59.639 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:59.639 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:59.896 request: 00:20:59.896 { 00:20:59.896 "name": "nvme0", 00:20:59.896 "trtype": "tcp", 00:20:59.896 "traddr": "10.0.0.2", 00:20:59.896 "adrfam": "ipv4", 00:20:59.896 "trsvcid": "4420", 00:20:59.896 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:59.896 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:59.896 "prchk_reftag": false, 00:20:59.896 "prchk_guard": false, 00:20:59.896 "hdgst": false, 00:20:59.896 "ddgst": false, 00:20:59.896 "dhchap_key": "key0", 00:20:59.896 "dhchap_ctrlr_key": "key1", 00:20:59.896 "method": "bdev_nvme_attach_controller", 00:20:59.896 "req_id": 1 00:20:59.896 } 00:20:59.896 Got JSON-RPC error response 00:20:59.896 response: 00:20:59.896 { 00:20:59.896 "code": -5, 00:20:59.896 "message": "Input/output error" 00:20:59.896 } 00:20:59.896 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:59.896 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:59.896 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:59.896 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:59.896 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:59.896 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:00.153 00:21:00.153 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:00.153 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
00:21:00.153 23:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.411 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.411 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.411 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.668 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:00.668 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:00.668 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1387223 00:21:00.668 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1387223 ']' 00:21:00.668 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1387223 00:21:00.668 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:00.668 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:00.668 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1387223 00:21:00.927 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:00.927 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:00.927 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1387223' 00:21:00.927 killing process with pid 1387223 00:21:00.927 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1387223 00:21:00.927 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1387223 00:21:01.187 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:01.187 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:01.187 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:01.187 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:01.187 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:01.187 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:01.187 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:01.187 rmmod nvme_tcp 00:21:01.187 rmmod nvme_fabrics 00:21:01.187 rmmod nvme_keyring 00:21:01.187 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:01.187 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:01.187 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:01.187 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- 
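Teardown unwinds the host kernel state: the trace shows nvmftestfini syncing and then unloading nvme-tcp, which takes the now-unused nvme_fabrics and nvme_keyring modules with it. The equivalent manual steps:

  # Flush outstanding I/O, then remove the TCP transport; modprobe -r -v
  # reports each rmmod it performs, matching the output above.
  sync
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
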
# '[' -n 1409711 ']' 00:21:01.187 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1409711 00:21:01.187 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1409711 ']' 00:21:01.187 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1409711 00:21:01.187 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:01.187 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:01.187 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1409711 00:21:01.187 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:01.187 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:01.187 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1409711' 00:21:01.187 killing process with pid 1409711 00:21:01.187 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1409711 00:21:01.187 23:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1409711 00:21:01.445 23:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:01.445 23:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:01.445 23:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:01.445 23:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:01.445 23:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:01.445 23:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.445 23:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:01.445 23:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.xhg /tmp/spdk.key-sha256.Ko2 /tmp/spdk.key-sha384.C1J /tmp/spdk.key-sha512.gDw /tmp/spdk.key-sha512.FOH /tmp/spdk.key-sha384.dbv /tmp/spdk.key-sha256.eOw '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:03.977 00:21:03.977 real 3m8.972s 00:21:03.977 user 7m20.163s 00:21:03.977 sys 0m24.857s 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.977 ************************************ 00:21:03.977 END TEST nvmf_auth_target 00:21:03.977 ************************************ 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:03.977 23:27:01 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:03.977 ************************************ 00:21:03.977 START TEST nvmf_bdevio_no_huge 00:21:03.977 ************************************ 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:03.977 * Looking for test storage... 00:21:03.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
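The next test in the sequence, nvmf_bdevio_no_huge, starts by sourcing nvmf/common.sh, which derives the host identity from nvme-cli as traced above. One way to reproduce those two variables:

  # gen-hostnqn prints a UUID-based NQN; the host ID is its UUID suffix.
  NVME_HOSTNQN=$(nvme gen-hostnqn)
  NVME_HOSTID=${NVME_HOSTNQN##*:}
  echo "$NVME_HOSTNQN -> $NVME_HOSTID"
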
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:03.977 23:27:01 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:03.977 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.978 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.978 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.978 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:03.978 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:03.978 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:03.978 23:27:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:05.876 23:27:03 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:05.876 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:05.876 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.877 23:27:03 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:05.877 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:05.877 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:05.877 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:05.877 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:21:05.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:21:05.877 00:21:05.877 --- 10.0.0.2 ping statistics --- 00:21:05.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.877 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:05.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:05.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:21:05.877 00:21:05.877 --- 10.0.0.1 ping statistics --- 00:21:05.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.877 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1412473 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1412473 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1412473 ']' 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:05.877 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
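The trace above shows nvmf_tcp_init assembling the point-to-point test topology: the target-side ice port (cvl_0_0) is moved into a private network namespace, the peer port (cvl_0_1) stays in the root namespace for the initiator, both ends get a 10.0.0.0/24 address, TCP port 4420 is opened in iptables, and connectivity is ping-verified in both directions. A minimal standalone sketch of the same plumbing, reusing the interface and namespace names from this run (a reconstruction for illustration, not the verbatim helper; adjust the NIC names for other hardware):

  # netns-based point-to-point topology, as built by nvmf_tcp_init above
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                  # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                               # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1           # target -> initiator

Every target-side command from here on is wrapped in ip netns exec cvl_0_0_ns_spdk via NVMF_TARGET_NS_CMD, which is how the nvmf_tgt launch below ends up running inside the namespace.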
00:21:05.878 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:05.878 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:05.878 [2024-07-25 23:27:03.366731] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:21:05.878 [2024-07-25 23:27:03.366808] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:05.878 [2024-07-25 23:27:03.420464] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:05.878 [2024-07-25 23:27:03.440101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:05.878 [2024-07-25 23:27:03.521929] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.878 [2024-07-25 23:27:03.521990] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.878 [2024-07-25 23:27:03.522017] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.878 [2024-07-25 23:27:03.522028] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:05.878 [2024-07-25 23:27:03.522037] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:05.878 [2024-07-25 23:27:03.522186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:05.878 [2024-07-25 23:27:03.522241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:21:05.878 [2024-07-25 23:27:03.522292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:21:05.878 [2024-07-25 23:27:03.522305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:06.135 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:06.136 [2024-07-25 23:27:03.636195] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b 
Malloc0 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:06.136 Malloc0 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:06.136 [2024-07-25 23:27:03.673841] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:06.136 { 00:21:06.136 "params": { 00:21:06.136 "name": "Nvme$subsystem", 00:21:06.136 "trtype": "$TEST_TRANSPORT", 00:21:06.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.136 "adrfam": "ipv4", 00:21:06.136 "trsvcid": "$NVMF_PORT", 00:21:06.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.136 "hdgst": ${hdgst:-false}, 00:21:06.136 "ddgst": ${ddgst:-false} 00:21:06.136 }, 00:21:06.136 "method": "bdev_nvme_attach_controller" 00:21:06.136 } 00:21:06.136 EOF 00:21:06.136 )") 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 
-- # cat 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:06.136 23:27:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:06.136 "params": { 00:21:06.136 "name": "Nvme1", 00:21:06.136 "trtype": "tcp", 00:21:06.136 "traddr": "10.0.0.2", 00:21:06.136 "adrfam": "ipv4", 00:21:06.136 "trsvcid": "4420", 00:21:06.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:06.136 "hdgst": false, 00:21:06.136 "ddgst": false 00:21:06.136 }, 00:21:06.136 "method": "bdev_nvme_attach_controller" 00:21:06.136 }' 00:21:06.136 [2024-07-25 23:27:03.717684] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:21:06.136 [2024-07-25 23:27:03.717775] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1412617 ] 00:21:06.136 [2024-07-25 23:27:03.761250] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:06.136 [2024-07-25 23:27:03.781182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:06.393 [2024-07-25 23:27:03.865822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.393 [2024-07-25 23:27:03.865871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.393 [2024-07-25 23:27:03.865874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.394 I/O targets: 00:21:06.394 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:06.394 00:21:06.394 00:21:06.394 CUnit - A unit testing framework for C - Version 2.1-3 00:21:06.394 http://cunit.sourceforge.net/ 00:21:06.394 00:21:06.394 00:21:06.394 Suite: bdevio tests on: Nvme1n1 00:21:06.394 Test: blockdev write read block ...passed 00:21:06.394 Test: blockdev write zeroes read block ...passed 00:21:06.394 Test: blockdev write zeroes read no split ...passed 00:21:06.652 Test: blockdev write zeroes read split ...passed 00:21:06.652 Test: blockdev write zeroes read split partial ...passed 00:21:06.652 Test: blockdev reset ...[2024-07-25 23:27:04.184443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:06.652 [2024-07-25 23:27:04.184548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ba330 (9): Bad file descriptor 00:21:06.652 [2024-07-25 23:27:04.198322] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:06.652 passed 00:21:06.652 Test: blockdev write read 8 blocks ...passed 00:21:06.652 Test: blockdev write read size > 128k ...passed 00:21:06.652 Test: blockdev write read invalid size ...passed 00:21:06.652 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:06.652 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:06.652 Test: blockdev write read max offset ...passed 00:21:06.652 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:06.652 Test: blockdev writev readv 8 blocks ...passed 00:21:06.652 Test: blockdev writev readv 30 x 1block ...passed 00:21:06.652 Test: blockdev writev readv block ...passed 00:21:06.652 Test: blockdev writev readv size > 128k ...passed 00:21:06.652 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:06.652 Test: blockdev comparev and writev ...[2024-07-25 23:27:04.371594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.652 [2024-07-25 23:27:04.371632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.652 [2024-07-25 23:27:04.371656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.652 [2024-07-25 23:27:04.371673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.652 [2024-07-25 23:27:04.372018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.652 [2024-07-25 23:27:04.372042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:06.652 [2024-07-25 23:27:04.372072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.652 [2024-07-25 23:27:04.372090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:06.652 [2024-07-25 23:27:04.372432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.652 [2024-07-25 23:27:04.372455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:06.652 [2024-07-25 23:27:04.372477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.652 [2024-07-25 23:27:04.372493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:06.652 [2024-07-25 23:27:04.372830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.652 [2024-07-25 23:27:04.372854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:06.652 [2024-07-25 23:27:04.372876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.652 [2024-07-25 23:27:04.372892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:06.912 passed 00:21:06.912 Test: blockdev nvme passthru rw ...passed 00:21:06.912 Test: blockdev nvme passthru vendor specific ...[2024-07-25 23:27:04.457349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:06.912 [2024-07-25 23:27:04.457377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:06.912 [2024-07-25 23:27:04.457542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:06.912 [2024-07-25 23:27:04.457565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:06.912 [2024-07-25 23:27:04.457724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:06.912 [2024-07-25 23:27:04.457752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:06.912 [2024-07-25 23:27:04.457917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:06.912 [2024-07-25 23:27:04.457939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:06.912 passed 00:21:06.912 Test: blockdev nvme admin passthru ...passed 00:21:06.912 Test: blockdev copy ...passed 00:21:06.912 00:21:06.912 Run Summary: Type Total Ran Passed Failed Inactive 00:21:06.912 suites 1 1 n/a 0 0 00:21:06.912 tests 23 23 23 0 0 00:21:06.912 asserts 152 152 152 0 n/a 00:21:06.912 00:21:06.912 Elapsed time = 0.991 seconds 00:21:07.171 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:07.171 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.171 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:07.171 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.171 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:07.171 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:07.171 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:07.171 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:07.171 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:07.171 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:07.171 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:07.171 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:07.171 rmmod nvme_tcp 00:21:07.171 rmmod nvme_fabrics 00:21:07.171 rmmod nvme_keyring 00:21:07.431 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:07.431 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:21:07.431 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:07.431 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1412473 ']' 00:21:07.431 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1412473 00:21:07.431 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1412473 ']' 00:21:07.431 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1412473 00:21:07.431 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:21:07.431 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:07.431 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1412473 00:21:07.431 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:21:07.431 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:21:07.431 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1412473' 00:21:07.431 killing process with pid 1412473 00:21:07.431 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1412473 00:21:07.431 23:27:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1412473 00:21:07.689 23:27:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:07.689 23:27:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:07.689 23:27:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:07.689 23:27:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:07.689 23:27:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:07.689 23:27:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.690 23:27:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:07.690 23:27:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:10.224 00:21:10.224 real 0m6.147s 00:21:10.224 user 0m9.310s 00:21:10.224 sys 0m2.413s 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:10.224 ************************************ 00:21:10.224 END TEST nvmf_bdevio_no_huge 00:21:10.224 ************************************ 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:10.224 ************************************ 00:21:10.224 START TEST nvmf_tls 00:21:10.224 ************************************ 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:10.224 * Looking for test storage... 00:21:10.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:10.224 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.225 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:10.225 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:10.225 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:10.225 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.225 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.225 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.225 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:10.225 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
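The unusually long PATH echoed above is not corruption: each nested test script re-sources /etc/opt/spdk-pkgdep/paths/export.sh, which unconditionally prepends the same three tool directories. Judging from the @2-@4 steps in the trace, the script is approximately (a sketch inferred from the trace, not the verbatim file):

  # /etc/opt/spdk-pkgdep/paths/export.sh, approximately:
  PATH=/opt/golangci/1.54.2/bin:$PATH
  PATH=/opt/go/1.21.1/bin:$PATH
  PATH=/opt/protoc/21.7/bin:$PATH
  export PATH

One golangci/go/protoc triple is prepended per sourcing, so by this point in the run the same prefix block appears many times over; this is harmless, since PATH lookup stops at the first match.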
00:21:10.225 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:10.225 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:10.225 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:10.225 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:10.225 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.225 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:10.225 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:10.225 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:10.225 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.225 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.225 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.225 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:10.225 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:10.225 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:10.225 23:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:12.128 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:12.128 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:12.128 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.128 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:12.128 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:12.129 23:27:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:12.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:12.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:21:12.129 00:21:12.129 --- 10.0.0.2 ping statistics --- 00:21:12.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.129 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:12.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:12.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:21:12.129 00:21:12.129 --- 10.0.0.1 ping statistics --- 00:21:12.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.129 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1415186 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1415186 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1415186 ']' 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.129 [2024-07-25 23:27:09.526474] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:21:12.129 [2024-07-25 23:27:09.526559] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.129 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.129 [2024-07-25 23:27:09.565113] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:12.129 [2024-07-25 23:27:09.591812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.129 [2024-07-25 23:27:09.676306] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.129 [2024-07-25 23:27:09.676379] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.129 [2024-07-25 23:27:09.676393] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.129 [2024-07-25 23:27:09.676412] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.129 [2024-07-25 23:27:09.676421] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:12.129 [2024-07-25 23:27:09.676446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:12.129 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:12.386 true 00:21:12.386 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:12.386 23:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:12.646 23:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:12.646 23:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:12.646 23:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:12.906 23:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:12.906 23:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:13.165 23:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:13.165 23:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 
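Annotation: the tls.sh@73-@90 steps above are a plain set/read-back check over JSON-RPC: select the ssl sock implementation, write a TLS version, read it back with jq, and fail on mismatch (the same loop repeats just below for version 7 and again for the enable/disable-ktls flag). A minimal standalone sketch of that loop, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock with rpc.py and jq on PATH (the CI invokes rpc.py by absolute workspace path instead):

# round-trip the ssl sock implementation's TLS version setting
rpc.py sock_set_default_impl -i ssl
for v in 13 7; do
    rpc.py sock_impl_set_options -i ssl --tls-version "$v"
    got=$(rpc.py sock_impl_get_options -i ssl | jq -r .tls_version)
    [[ "$got" == "$v" ]] || { echo "tls_version mismatch: got $got, want $v"; exit 1; }
done
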
00:21:13.165 23:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:13.422 23:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:13.422 23:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:13.680 23:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:13.680 23:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:13.680 23:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:13.680 23:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:13.938 23:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:13.938 23:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:13.938 23:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:14.196 23:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:14.196 23:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:14.456 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:14.456 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:14.456 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:14.715 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:14.715 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # 
format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.5oiS6RtMO9 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.ANANygW422 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.5oiS6RtMO9 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ANANygW422 00:21:14.974 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:15.233 23:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:15.804 23:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.5oiS6RtMO9 00:21:15.804 23:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.5oiS6RtMO9 00:21:15.805 23:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:16.065 [2024-07-25 23:27:13.602466] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.065 23:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:16.325 23:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:16.612 [2024-07-25 23:27:14.143913] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:16.613 [2024-07-25 23:27:14.144184] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.613 23:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:16.869 malloc0 00:21:16.869 23:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:17.126 23:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5oiS6RtMO9 00:21:17.385 [2024-07-25 23:27:14.913101] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:17.385 23:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.5oiS6RtMO9 00:21:17.385 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.369 Initializing NVMe Controllers 00:21:27.369 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:27.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:27.369 Initialization complete. Launching workers. 00:21:27.369 ======================================================== 00:21:27.369 Latency(us) 00:21:27.369 Device Information : IOPS MiB/s Average min max 00:21:27.369 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7785.80 30.41 8222.87 1225.71 9896.94 00:21:27.369 ======================================================== 00:21:27.369 Total : 7785.80 30.41 8222.87 1225.71 9896.94 00:21:27.369 00:21:27.369 23:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5oiS6RtMO9 00:21:27.369 23:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:27.369 23:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:27.369 23:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:27.369 23:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.5oiS6RtMO9' 00:21:27.369 23:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:27.369 23:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1416988 00:21:27.369 23:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:27.369 23:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:27.369 23:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1416988 /var/tmp/bdevperf.sock 00:21:27.369 23:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1416988 ']' 00:21:27.369 23:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:27.369 23:27:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:27.369 23:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:27.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:27.369 23:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:27.369 23:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.369 [2024-07-25 23:27:25.085366] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:21:27.369 [2024-07-25 23:27:25.085455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1416988 ] 00:21:27.626 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.626 [2024-07-25 23:27:25.118818] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:27.626 [2024-07-25 23:27:25.147772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.626 [2024-07-25 23:27:25.231834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.626 23:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:27.626 23:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:27.626 23:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5oiS6RtMO9 00:21:27.883 [2024-07-25 23:27:25.559102] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:27.883 [2024-07-25 23:27:25.559213] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:28.139 TLSTESTn1 00:21:28.139 23:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:28.139 Running I/O for 10 seconds... 
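Annotation: the run launched above is driven in three steps, all visible in the trace: start bdevperf with -z so it idles waiting for RPC configuration, attach the TLS-enabled NVMe/TCP controller over bdevperf's private RPC socket, then kick the workload via bdevperf.py. A hand-run sketch under the same assumptions (paths relative to an SPDK build tree; the trace uses absolute workspace paths, and the harness also waits for the RPC socket to appear before attaching, elided here):

# 1) bdevperf in wait-for-RPC mode on its own socket
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# 2) attach the NVMe/TCP controller, supplying the client-side PSK
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.5oiS6RtMO9

# 3) run the configured workload (TLSTESTn1 in the results below is the resulting bdev)
./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
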
00:21:38.117 00:21:38.117 Latency(us) 00:21:38.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.117 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:38.117 Verification LBA range: start 0x0 length 0x2000 00:21:38.117 TLSTESTn1 : 10.02 3586.50 14.01 0.00 0.00 35619.72 9077.95 55535.69 00:21:38.117 =================================================================================================================== 00:21:38.117 Total : 3586.50 14.01 0.00 0.00 35619.72 9077.95 55535.69 00:21:38.117 0 00:21:38.117 23:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:38.117 23:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1416988 00:21:38.117 23:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1416988 ']' 00:21:38.117 23:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1416988 00:21:38.117 23:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:38.117 23:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:38.117 23:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1416988 00:21:38.117 23:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:38.117 23:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:38.117 23:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1416988' 00:21:38.117 killing process with pid 1416988 00:21:38.117 23:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1416988 00:21:38.117 Received shutdown signal, test time was about 10.000000 seconds 00:21:38.117 00:21:38.117 Latency(us) 00:21:38.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.117 =================================================================================================================== 00:21:38.117 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:38.118 [2024-07-25 23:27:35.839019] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:38.118 23:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1416988 00:21:38.376 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ANANygW422 00:21:38.376 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:38.376 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ANANygW422 00:21:38.376 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:38.376 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:38.376 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:38.376 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:21:38.376 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ANANygW422 00:21:38.376 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:38.376 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:38.376 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:38.376 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ANANygW422' 00:21:38.376 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:38.376 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1418289 00:21:38.376 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:38.376 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:38.376 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1418289 /var/tmp/bdevperf.sock 00:21:38.376 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1418289 ']' 00:21:38.376 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:38.376 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:38.376 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:38.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:38.376 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:38.376 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.636 [2024-07-25 23:27:36.116268] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:21:38.636 [2024-07-25 23:27:36.116343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1418289 ] 00:21:38.636 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.636 [2024-07-25 23:27:36.148519] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:21:38.636 [2024-07-25 23:27:36.177109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.636 [2024-07-25 23:27:36.263805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.896 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:38.896 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:38.896 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ANANygW422 00:21:38.896 [2024-07-25 23:27:36.614703] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:38.896 [2024-07-25 23:27:36.614826] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:39.157 [2024-07-25 23:27:36.626149] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:39.157 [2024-07-25 23:27:36.626953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d878d0 (107): Transport endpoint is not connected 00:21:39.157 [2024-07-25 23:27:36.627943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d878d0 (9): Bad file descriptor 00:21:39.157 [2024-07-25 23:27:36.628943] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:39.157 [2024-07-25 23:27:36.628962] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:39.157 [2024-07-25 23:27:36.628993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
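Annotation: this failure is the point of the case. tls.sh@58 registered /tmp/tmp.5oiS6RtMO9 for host1, so presenting the second key /tmp/tmp.ANANygW422 gives the target no matching PSK, the TLS handshake never completes, and the attach dies with "Transport endpoint is not connected" before controller init; the JSON-RPC request/response dump follows. The NOT wrapper converts that expected failure into a pass. A minimal stand-in for the idiom (the real helper in autotest_common.sh is more elaborate):

NOT() { if "$@"; then return 1; else return 0; fi; }  # sketch: succeed only when the wrapped command fails
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ANANygW422
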
00:21:39.157 request: 00:21:39.157 { 00:21:39.157 "name": "TLSTEST", 00:21:39.157 "trtype": "tcp", 00:21:39.157 "traddr": "10.0.0.2", 00:21:39.157 "adrfam": "ipv4", 00:21:39.157 "trsvcid": "4420", 00:21:39.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.157 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:39.157 "prchk_reftag": false, 00:21:39.157 "prchk_guard": false, 00:21:39.157 "hdgst": false, 00:21:39.157 "ddgst": false, 00:21:39.157 "psk": "/tmp/tmp.ANANygW422", 00:21:39.157 "method": "bdev_nvme_attach_controller", 00:21:39.157 "req_id": 1 00:21:39.157 } 00:21:39.157 Got JSON-RPC error response 00:21:39.157 response: 00:21:39.157 { 00:21:39.157 "code": -5, 00:21:39.157 "message": "Input/output error" 00:21:39.157 } 00:21:39.157 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1418289 00:21:39.157 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1418289 ']' 00:21:39.157 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1418289 00:21:39.157 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:39.157 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:39.157 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1418289 00:21:39.157 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:39.157 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:39.157 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1418289' 00:21:39.157 killing process with pid 1418289 00:21:39.157 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1418289 00:21:39.157 Received shutdown signal, test time was about 10.000000 seconds 00:21:39.157 00:21:39.157 Latency(us) 00:21:39.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.157 =================================================================================================================== 00:21:39.157 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:39.157 [2024-07-25 23:27:36.680178] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:39.157 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1418289 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.5oiS6RtMO9 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.5oiS6RtMO9 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.5oiS6RtMO9 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.5oiS6RtMO9' 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1418425 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1418425 /var/tmp/bdevperf.sock 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1418425 ']' 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:39.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:39.419 23:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.419 [2024-07-25 23:27:36.946442] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:21:39.419 [2024-07-25 23:27:36.946519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1418425 ] 00:21:39.419 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.419 [2024-07-25 23:27:36.977364] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:21:39.419 [2024-07-25 23:27:37.004023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.419 [2024-07-25 23:27:37.084497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.678 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:39.678 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:39.678 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.5oiS6RtMO9 00:21:39.936 [2024-07-25 23:27:37.440609] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:39.936 [2024-07-25 23:27:37.440737] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:39.936 [2024-07-25 23:27:37.448107] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:39.936 [2024-07-25 23:27:37.448141] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:39.936 [2024-07-25 23:27:37.448187] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:39.936 [2024-07-25 23:27:37.448533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a8d0 (107): Transport endpoint is not connected 00:21:39.936 [2024-07-25 23:27:37.449522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a8d0 (9): Bad file descriptor 00:21:39.936 [2024-07-25 23:27:37.450526] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:39.936 [2024-07-25 23:27:37.450545] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:39.936 [2024-07-25 23:27:37.450578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
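Annotation: here the key bytes are valid but the identity is not. The target resolves PSKs per (hostnqn, subnqn) pair, and the error spells out the identity the client presented: "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1". Only the host1/cnode1 mapping was installed (tls.sh@58 above); the registration that would let this variant succeed is the hypothetical line below, deliberately absent from the test:

# hypothetical, not run by tls.sh: give host2 its own PSK mapping on cnode1
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.5oiS6RtMO9
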
00:21:39.936 request: 00:21:39.936 { 00:21:39.936 "name": "TLSTEST", 00:21:39.936 "trtype": "tcp", 00:21:39.936 "traddr": "10.0.0.2", 00:21:39.936 "adrfam": "ipv4", 00:21:39.936 "trsvcid": "4420", 00:21:39.936 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.936 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:39.936 "prchk_reftag": false, 00:21:39.936 "prchk_guard": false, 00:21:39.936 "hdgst": false, 00:21:39.936 "ddgst": false, 00:21:39.936 "psk": "/tmp/tmp.5oiS6RtMO9", 00:21:39.936 "method": "bdev_nvme_attach_controller", 00:21:39.936 "req_id": 1 00:21:39.936 } 00:21:39.936 Got JSON-RPC error response 00:21:39.936 response: 00:21:39.936 { 00:21:39.936 "code": -5, 00:21:39.936 "message": "Input/output error" 00:21:39.936 } 00:21:39.936 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1418425 00:21:39.936 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1418425 ']' 00:21:39.936 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1418425 00:21:39.936 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:39.936 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:39.936 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1418425 00:21:39.937 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:39.937 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:39.937 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1418425' 00:21:39.937 killing process with pid 1418425 00:21:39.937 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1418425 00:21:39.937 Received shutdown signal, test time was about 10.000000 seconds 00:21:39.937 00:21:39.937 Latency(us) 00:21:39.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.937 =================================================================================================================== 00:21:39.937 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:39.937 [2024-07-25 23:27:37.496830] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:39.937 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1418425 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.5oiS6RtMO9 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.5oiS6RtMO9 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.5oiS6RtMO9 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.5oiS6RtMO9' 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1418558 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1418558 /var/tmp/bdevperf.sock 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1418558 ']' 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:40.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:40.195 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.195 [2024-07-25 23:27:37.733395] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:21:40.195 [2024-07-25 23:27:37.733485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1418558 ] 00:21:40.195 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.195 [2024-07-25 23:27:37.766248] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:21:40.195 [2024-07-25 23:27:37.794372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.195 [2024-07-25 23:27:37.881065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.454 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:40.454 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:40.454 23:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5oiS6RtMO9 00:21:40.711 [2024-07-25 23:27:38.202783] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:40.712 [2024-07-25 23:27:38.202911] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:40.712 [2024-07-25 23:27:38.210041] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:40.712 [2024-07-25 23:27:38.210096] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:40.712 [2024-07-25 23:27:38.210149] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:40.712 [2024-07-25 23:27:38.210748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12348d0 (107): Transport endpoint is not connected 00:21:40.712 [2024-07-25 23:27:38.211736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12348d0 (9): Bad file descriptor 00:21:40.712 [2024-07-25 23:27:38.212735] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:40.712 [2024-07-25 23:27:38.212755] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:40.712 [2024-07-25 23:27:38.212772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:40.712 request: 00:21:40.712 { 00:21:40.712 "name": "TLSTEST", 00:21:40.712 "trtype": "tcp", 00:21:40.712 "traddr": "10.0.0.2", 00:21:40.712 "adrfam": "ipv4", 00:21:40.712 "trsvcid": "4420", 00:21:40.712 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:40.712 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:40.712 "prchk_reftag": false, 00:21:40.712 "prchk_guard": false, 00:21:40.712 "hdgst": false, 00:21:40.712 "ddgst": false, 00:21:40.712 "psk": "/tmp/tmp.5oiS6RtMO9", 00:21:40.712 "method": "bdev_nvme_attach_controller", 00:21:40.712 "req_id": 1 00:21:40.712 } 00:21:40.712 Got JSON-RPC error response 00:21:40.712 response: 00:21:40.712 { 00:21:40.712 "code": -5, 00:21:40.712 "message": "Input/output error" 00:21:40.712 } 00:21:40.712 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1418558 00:21:40.712 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1418558 ']' 00:21:40.712 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1418558 00:21:40.712 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:40.712 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:40.712 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1418558 00:21:40.712 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:40.712 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:40.712 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1418558' 00:21:40.712 killing process with pid 1418558 00:21:40.712 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1418558 00:21:40.712 Received shutdown signal, test time was about 10.000000 seconds 00:21:40.712 00:21:40.712 Latency(us) 00:21:40.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:40.712 =================================================================================================================== 00:21:40.712 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:40.712 [2024-07-25 23:27:38.263167] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:40.712 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1418558 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1418580 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1418580 /var/tmp/bdevperf.sock 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1418580 ']' 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:40.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:40.971 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.971 [2024-07-25 23:27:38.517739] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:21:40.971 [2024-07-25 23:27:38.517828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1418580 ] 00:21:40.971 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.971 [2024-07-25 23:27:38.550756] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:21:40.971 [2024-07-25 23:27:38.579238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.971 [2024-07-25 23:27:38.670761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.229 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:41.229 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:41.229 23:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:41.488 [2024-07-25 23:27:39.052331] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:41.488 [2024-07-25 23:27:39.054035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcebde0 (9): Bad file descriptor 00:21:41.488 [2024-07-25 23:27:39.055032] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:41.488 [2024-07-25 23:27:39.055071] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:41.488 [2024-07-25 23:27:39.055089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:41.488 request: 00:21:41.488 { 00:21:41.488 "name": "TLSTEST", 00:21:41.488 "trtype": "tcp", 00:21:41.488 "traddr": "10.0.0.2", 00:21:41.488 "adrfam": "ipv4", 00:21:41.488 "trsvcid": "4420", 00:21:41.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:41.488 "prchk_reftag": false, 00:21:41.488 "prchk_guard": false, 00:21:41.488 "hdgst": false, 00:21:41.488 "ddgst": false, 00:21:41.488 "method": "bdev_nvme_attach_controller", 00:21:41.488 "req_id": 1 00:21:41.488 } 00:21:41.488 Got JSON-RPC error response 00:21:41.488 response: 00:21:41.488 { 00:21:41.488 "code": -5, 00:21:41.488 "message": "Input/output error" 00:21:41.488 } 00:21:41.488 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1418580 00:21:41.488 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1418580 ']' 00:21:41.488 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1418580 00:21:41.488 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:41.488 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:41.488 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1418580 00:21:41.488 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:41.489 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:41.489 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1418580' 00:21:41.489 killing process with pid 1418580 00:21:41.489 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1418580 00:21:41.489 Received shutdown signal, test time was about 10.000000 seconds 00:21:41.489 00:21:41.489 Latency(us) 00:21:41.489 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.489 =================================================================================================================== 00:21:41.489 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:41.489 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1418580 00:21:41.749 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:41.749 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:41.749 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:41.749 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:41.749 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:41.749 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 1415186 00:21:41.749 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1415186 ']' 00:21:41.749 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1415186 00:21:41.749 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:41.749 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:41.749 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1415186 00:21:41.749 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:41.749 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:41.749 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1415186' 00:21:41.749 killing process with pid 1415186 00:21:41.749 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1415186 00:21:41.749 [2024-07-25 23:27:39.353559] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:41.749 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1415186 00:21:42.008 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:42.008 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:42.008 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:42.008 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:42.008 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:42.008 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:42.008 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:42.008 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:42.008 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:42.008 23:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.s70BQFaoie 00:21:42.008 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:42.008 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.s70BQFaoie 00:21:42.008 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:42.008 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:42.008 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:42.008 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.009 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1418731 00:21:42.009 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:42.009 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1418731 00:21:42.009 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1418731 ']' 00:21:42.009 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.009 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:42.009 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.009 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:42.009 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.009 [2024-07-25 23:27:39.692304] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:21:42.009 [2024-07-25 23:27:39.692422] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.009 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.009 [2024-07-25 23:27:39.728805] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:42.267 [2024-07-25 23:27:39.756716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.267 [2024-07-25 23:27:39.844291] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.267 [2024-07-25 23:27:39.844363] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.267 [2024-07-25 23:27:39.844378] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.267 [2024-07-25 23:27:39.844413] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.267 [2024-07-25 23:27:39.844423] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
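The key_long value generated above is a TLS PSK in the NVMe/TCP interchange format: the literal prefix NVMeTLSkey-1, a two-digit hash identifier (02 selecting SHA-384 here, matching digest=2 in the trace), and a colon-terminated base64 payload holding the configured PSK bytes with a 4-byte CRC32 appended. A minimal sketch of that encoding, assuming the CRC32 is appended little-endian before base64 encoding (inferred from the output above, not a verbatim copy of the format_key helper in nvmf/common.sh):

key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" <<'PY'
import base64, struct, sys, zlib
key = sys.argv[1].encode()                  # PSK bytes exactly as passed on the command line
crc = struct.pack("<I", zlib.crc32(key))    # trailing 4-byte CRC32, little-endian (assumption)
print("NVMeTLSkey-1:02:%s:" % base64.b64encode(key + crc).decode())
PY

Run against the key above, this should reproduce the NVMeTLSkey-1:02:...wWXNJw==: string that the test then writes to /tmp/tmp.s70BQFaoie and locks down with chmod 0600.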
00:21:42.267 [2024-07-25 23:27:39.844463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.267 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:42.267 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:42.267 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:42.267 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:42.267 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.267 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.267 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.s70BQFaoie 00:21:42.267 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.s70BQFaoie 00:21:42.267 23:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:42.525 [2024-07-25 23:27:40.207362] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.525 23:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:43.090 23:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:43.090 [2024-07-25 23:27:40.780836] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:43.090 [2024-07-25 23:27:40.781088] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.090 23:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:43.347 malloc0 00:21:43.348 23:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:43.606 23:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.s70BQFaoie 00:21:43.864 [2024-07-25 23:27:41.526489] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:43.864 23:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.s70BQFaoie 00:21:43.864 23:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:43.864 23:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:43.864 23:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:43.864 23:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.s70BQFaoie' 00:21:43.864 23:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:43.864 23:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1419011 00:21:43.864 23:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:43.864 23:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:43.864 23:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1419011 /var/tmp/bdevperf.sock 00:21:43.864 23:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1419011 ']' 00:21:43.864 23:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:43.864 23:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:43.864 23:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:43.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:43.864 23:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:43.864 23:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.864 [2024-07-25 23:27:41.580778] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:21:43.864 [2024-07-25 23:27:41.580866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1419011 ] 00:21:44.123 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.123 [2024-07-25 23:27:41.613961] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
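With the target listening with -k (TLS) and the host registered with --psk, run_bdevperf drives the initiator side entirely over the bdevperf RPC socket: bdevperf is started idle with -z, a TLS-enabled controller is attached through /var/tmp/bdevperf.sock, and bdevperf.py triggers the timed I/O, as the TLSTESTn1 trace below shows. Condensed from the trace (paths shortened; the full commands appear verbatim in the log):

# start bdevperf idle, waiting for configuration over its RPC socket
# (the test then polls the socket with waitforlisten before issuing RPCs)
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# attach a controller over TLS using the interchange key file
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.s70BQFaoie

# run the timed workload against the attached bdev
./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests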
00:21:44.123 [2024-07-25 23:27:41.641623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.123 [2024-07-25 23:27:41.728163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.123 23:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:44.123 23:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:44.123 23:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.s70BQFaoie 00:21:44.381 [2024-07-25 23:27:42.069949] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:44.381 [2024-07-25 23:27:42.070102] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:44.640 TLSTESTn1 00:21:44.640 23:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:44.641 Running I/O for 10 seconds... 00:21:54.654 00:21:54.654 Latency(us) 00:21:54.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.654 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:54.654 Verification LBA range: start 0x0 length 0x2000 00:21:54.654 TLSTESTn1 : 10.02 3597.32 14.05 0.00 0.00 35515.07 10291.58 42719.76 00:21:54.654 =================================================================================================================== 00:21:54.654 Total : 3597.32 14.05 0.00 0.00 35515.07 10291.58 42719.76 00:21:54.654 0 00:21:54.654 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:54.654 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1419011 00:21:54.654 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1419011 ']' 00:21:54.654 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1419011 00:21:54.654 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:54.654 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:54.654 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1419011 00:21:54.654 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:54.654 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:54.654 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1419011' 00:21:54.654 killing process with pid 1419011 00:21:54.654 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1419011 00:21:54.654 Received shutdown signal, test time was about 10.000000 seconds 00:21:54.654 00:21:54.654 Latency(us) 00:21:54.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.654 
=================================================================================================================== 00:21:54.654 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:54.654 [2024-07-25 23:27:52.359949] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:54.654 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1419011 00:21:54.912 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.s70BQFaoie 00:21:54.912 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.s70BQFaoie 00:21:54.912 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:54.912 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.s70BQFaoie 00:21:54.912 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:54.912 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:54.912 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:54.912 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:54.912 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.s70BQFaoie 00:21:54.912 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:54.912 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:54.912 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:54.912 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.s70BQFaoie' 00:21:54.912 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:54.912 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1420326 00:21:54.912 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:54.912 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:54.912 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1420326 /var/tmp/bdevperf.sock 00:21:54.912 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1420326 ']' 00:21:54.912 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:54.912 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:54.912 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:54.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
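This pass intentionally breaks the setup: tls.sh has just loosened the key file to 0666, so the attach attempt below fails in bdev_nvme_load_psk with "Incorrect permissions for PSK file" and the RPC returns -1 (Operation not permitted). The loader insists on an owner-only key file; a rough shell sketch of the effect (an approximation, not the actual C check in bdev_nvme.c, which rejects any group/other permission bits):

perm=$(stat -c '%a' /tmp/tmp.s70BQFaoie)
if [ "${perm: -2}" != "00" ]; then      # any group/other bits set, e.g. 0666
        echo "Incorrect permissions for PSK file" >&2
        exit 1
fi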
00:21:54.912 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:54.912 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:54.912 [2024-07-25 23:27:52.628855] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:21:54.912 [2024-07-25 23:27:52.628940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1420326 ] 00:21:55.170 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.170 [2024-07-25 23:27:52.661100] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:55.170 [2024-07-25 23:27:52.687859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.170 [2024-07-25 23:27:52.770155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:55.170 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:55.171 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:55.171 23:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.s70BQFaoie 00:21:55.428 [2024-07-25 23:27:53.152936] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:55.428 [2024-07-25 23:27:53.153002] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:55.428 [2024-07-25 23:27:53.153016] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.s70BQFaoie 00:21:55.686 request: 00:21:55.686 { 00:21:55.686 "name": "TLSTEST", 00:21:55.686 "trtype": "tcp", 00:21:55.686 "traddr": "10.0.0.2", 00:21:55.686 "adrfam": "ipv4", 00:21:55.686 "trsvcid": "4420", 00:21:55.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:55.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:55.686 "prchk_reftag": false, 00:21:55.686 "prchk_guard": false, 00:21:55.686 "hdgst": false, 00:21:55.686 "ddgst": false, 00:21:55.686 "psk": "/tmp/tmp.s70BQFaoie", 00:21:55.686 "method": "bdev_nvme_attach_controller", 00:21:55.686 "req_id": 1 00:21:55.686 } 00:21:55.686 Got JSON-RPC error response 00:21:55.686 response: 00:21:55.686 { 00:21:55.686 "code": -1, 00:21:55.686 "message": "Operation not permitted" 00:21:55.686 } 00:21:55.686 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1420326 00:21:55.686 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1420326 ']' 00:21:55.686 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1420326 00:21:55.686 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:55.686 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:55.686 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1420326 00:21:55.686 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
process_name=reactor_2 00:21:55.686 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:55.686 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1420326' 00:21:55.686 killing process with pid 1420326 00:21:55.686 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1420326 00:21:55.686 Received shutdown signal, test time was about 10.000000 seconds 00:21:55.686 00:21:55.686 Latency(us) 00:21:55.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.686 =================================================================================================================== 00:21:55.686 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:55.686 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1420326 00:21:55.944 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:55.944 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:55.944 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:55.944 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:55.944 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:55.944 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 1418731 00:21:55.944 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1418731 ']' 00:21:55.944 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1418731 00:21:55.944 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:55.944 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:55.944 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1418731 00:21:55.944 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:55.944 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:55.944 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1418731' 00:21:55.944 killing process with pid 1418731 00:21:55.944 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1418731 00:21:55.944 [2024-07-25 23:27:53.445263] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:55.944 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1418731 00:21:56.201 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:56.201 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:56.201 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:56.201 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:56.201 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1420473 00:21:56.201 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:56.201 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1420473 00:21:56.201 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1420473 ']' 00:21:56.201 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.201 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:56.201 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.201 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:56.201 23:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:56.201 [2024-07-25 23:27:53.740327] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:21:56.202 [2024-07-25 23:27:53.740417] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.202 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.202 [2024-07-25 23:27:53.776057] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:56.202 [2024-07-25 23:27:53.807400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.202 [2024-07-25 23:27:53.898066] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.202 [2024-07-25 23:27:53.898142] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.202 [2024-07-25 23:27:53.898171] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.202 [2024-07-25 23:27:53.898184] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.202 [2024-07-25 23:27:53.898196] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
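The same world-readable key is now tested on the target side: the NOT setup_nvmf_tgt run below walks through transport, subsystem, listener, and namespace creation, and only the final nvmf_subsystem_add_host --psk call fails (tcp_load_psk rejects the 0666 file, surfacing as JSON-RPC -32603 Internal error). The test asserts the failure through the NOT wrapper, which inverts the wrapped command's exit status; a simplified sketch of the idiom (the real helper in autotest_common.sh also vets the argument via valid_exec_arg, visible in the trace):

NOT() {
        if "$@"; then
                return 1        # wrapped command unexpectedly succeeded
        fi
        return 0                # expected failure observed
}

NOT setup_nvmf_tgt /tmp/tmp.s70BQFaoie   # passes only because add_host rejects the key file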
00:21:56.202 [2024-07-25 23:27:53.898232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.459 23:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:56.459 23:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:56.459 23:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:56.459 23:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:56.459 23:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:56.459 23:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.459 23:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.s70BQFaoie 00:21:56.459 23:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:56.459 23:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.s70BQFaoie 00:21:56.459 23:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:21:56.459 23:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:56.459 23:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:21:56.459 23:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:56.459 23:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.s70BQFaoie 00:21:56.459 23:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.s70BQFaoie 00:21:56.459 23:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:56.719 [2024-07-25 23:27:54.250709] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.719 23:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:56.978 23:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:57.235 [2024-07-25 23:27:54.760110] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:57.235 [2024-07-25 23:27:54.760385] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.235 23:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:57.493 malloc0 00:21:57.493 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:57.751 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.s70BQFaoie 00:21:58.009 [2024-07-25 23:27:55.501561] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:58.009 [2024-07-25 23:27:55.501601] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:58.009 [2024-07-25 23:27:55.501641] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:58.009 request: 00:21:58.009 { 00:21:58.009 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.009 "host": "nqn.2016-06.io.spdk:host1", 00:21:58.009 "psk": "/tmp/tmp.s70BQFaoie", 00:21:58.009 "method": "nvmf_subsystem_add_host", 00:21:58.009 "req_id": 1 00:21:58.009 } 00:21:58.009 Got JSON-RPC error response 00:21:58.009 response: 00:21:58.009 { 00:21:58.009 "code": -32603, 00:21:58.009 "message": "Internal error" 00:21:58.009 } 00:21:58.009 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:58.009 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:58.009 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:58.009 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:58.009 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 1420473 00:21:58.009 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1420473 ']' 00:21:58.009 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1420473 00:21:58.010 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:58.010 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:58.010 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1420473 00:21:58.010 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:58.010 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:58.010 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1420473' 00:21:58.010 killing process with pid 1420473 00:21:58.010 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1420473 00:21:58.010 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1420473 00:21:58.270 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.s70BQFaoie 00:21:58.270 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:58.270 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:58.270 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:58.270 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.270 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1420677 00:21:58.270 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:58.270 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # 
waitforlisten 1420677 00:21:58.270 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1420677 ']' 00:21:58.270 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.270 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:58.270 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.270 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:58.270 23:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.270 [2024-07-25 23:27:55.848118] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:21:58.270 [2024-07-25 23:27:55.848214] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.270 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.270 [2024-07-25 23:27:55.891120] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:58.270 [2024-07-25 23:27:55.920021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.528 [2024-07-25 23:27:56.011651] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.528 [2024-07-25 23:27:56.011709] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.528 [2024-07-25 23:27:56.011723] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.528 [2024-07-25 23:27:56.011735] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:58.528 [2024-07-25 23:27:56.011745] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
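For the final positive pass the key file has been restored to 0600 and a fresh target is brought up; once bdevperf's TLSTEST controller attaches, tls.sh captures both ends' configuration with save_config, and the long JSON dumps below are those snapshots (tgtconf from the target's /var/tmp/spdk.sock, bdevperfconf from /var/tmp/bdevperf.sock). The same snapshot can be taken by hand, e.g. to inspect just the nvmf subsystem (assuming jq is available on the host):

./scripts/rpc.py -s /var/tmp/spdk.sock save_config \
        | jq '.subsystems[] | select(.subsystem == "nvmf")'

In the dump below, note the nvmf_subsystem_add_host entry carrying the /tmp/tmp.s70BQFaoie psk path and the listener marked "secure_channel": true.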
00:21:58.528 [2024-07-25 23:27:56.011771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.528 23:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:58.528 23:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:58.528 23:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:58.528 23:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:58.528 23:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.529 23:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.529 23:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.s70BQFaoie 00:21:58.529 23:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.s70BQFaoie 00:21:58.529 23:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:58.786 [2024-07-25 23:27:56.428568] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.786 23:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:59.044 23:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:59.303 [2024-07-25 23:27:57.018181] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:59.303 [2024-07-25 23:27:57.018453] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.561 23:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:59.819 malloc0 00:21:59.819 23:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:00.077 23:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.s70BQFaoie 00:22:00.335 [2024-07-25 23:27:57.803407] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:00.335 23:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1420929 00:22:00.335 23:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:00.335 23:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:00.335 23:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1420929 /var/tmp/bdevperf.sock 00:22:00.335 23:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- 
# '[' -z 1420929 ']' 00:22:00.335 23:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:00.335 23:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:00.335 23:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:00.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:00.335 23:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:00.335 23:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.335 [2024-07-25 23:27:57.860929] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:00.335 [2024-07-25 23:27:57.861012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1420929 ] 00:22:00.335 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.335 [2024-07-25 23:27:57.892839] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:00.335 [2024-07-25 23:27:57.920204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.335 [2024-07-25 23:27:58.009870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.593 23:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:00.593 23:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:00.593 23:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.s70BQFaoie 00:22:00.850 [2024-07-25 23:27:58.339932] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:00.850 [2024-07-25 23:27:58.340069] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:00.850 TLSTESTn1 00:22:00.850 23:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:01.107 23:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:01.107 "subsystems": [ 00:22:01.107 { 00:22:01.107 "subsystem": "keyring", 00:22:01.107 "config": [] 00:22:01.107 }, 00:22:01.107 { 00:22:01.107 "subsystem": "iobuf", 00:22:01.107 "config": [ 00:22:01.107 { 00:22:01.107 "method": "iobuf_set_options", 00:22:01.107 "params": { 00:22:01.107 "small_pool_count": 8192, 00:22:01.107 "large_pool_count": 1024, 00:22:01.107 "small_bufsize": 8192, 00:22:01.107 "large_bufsize": 135168 00:22:01.107 } 00:22:01.107 } 00:22:01.107 ] 00:22:01.107 }, 00:22:01.107 { 00:22:01.107 "subsystem": "sock", 00:22:01.107 "config": [ 00:22:01.107 { 00:22:01.107 "method": "sock_set_default_impl", 00:22:01.107 "params": { 00:22:01.107 "impl_name": "posix" 00:22:01.107 } 00:22:01.107 }, 
00:22:01.107 { 00:22:01.107 "method": "sock_impl_set_options", 00:22:01.107 "params": { 00:22:01.107 "impl_name": "ssl", 00:22:01.107 "recv_buf_size": 4096, 00:22:01.107 "send_buf_size": 4096, 00:22:01.107 "enable_recv_pipe": true, 00:22:01.107 "enable_quickack": false, 00:22:01.107 "enable_placement_id": 0, 00:22:01.107 "enable_zerocopy_send_server": true, 00:22:01.107 "enable_zerocopy_send_client": false, 00:22:01.107 "zerocopy_threshold": 0, 00:22:01.107 "tls_version": 0, 00:22:01.107 "enable_ktls": false 00:22:01.107 } 00:22:01.107 }, 00:22:01.107 { 00:22:01.107 "method": "sock_impl_set_options", 00:22:01.107 "params": { 00:22:01.107 "impl_name": "posix", 00:22:01.107 "recv_buf_size": 2097152, 00:22:01.107 "send_buf_size": 2097152, 00:22:01.107 "enable_recv_pipe": true, 00:22:01.107 "enable_quickack": false, 00:22:01.107 "enable_placement_id": 0, 00:22:01.107 "enable_zerocopy_send_server": true, 00:22:01.107 "enable_zerocopy_send_client": false, 00:22:01.107 "zerocopy_threshold": 0, 00:22:01.107 "tls_version": 0, 00:22:01.107 "enable_ktls": false 00:22:01.107 } 00:22:01.107 } 00:22:01.107 ] 00:22:01.107 }, 00:22:01.107 { 00:22:01.107 "subsystem": "vmd", 00:22:01.107 "config": [] 00:22:01.107 }, 00:22:01.107 { 00:22:01.107 "subsystem": "accel", 00:22:01.107 "config": [ 00:22:01.107 { 00:22:01.108 "method": "accel_set_options", 00:22:01.108 "params": { 00:22:01.108 "small_cache_size": 128, 00:22:01.108 "large_cache_size": 16, 00:22:01.108 "task_count": 2048, 00:22:01.108 "sequence_count": 2048, 00:22:01.108 "buf_count": 2048 00:22:01.108 } 00:22:01.108 } 00:22:01.108 ] 00:22:01.108 }, 00:22:01.108 { 00:22:01.108 "subsystem": "bdev", 00:22:01.108 "config": [ 00:22:01.108 { 00:22:01.108 "method": "bdev_set_options", 00:22:01.108 "params": { 00:22:01.108 "bdev_io_pool_size": 65535, 00:22:01.108 "bdev_io_cache_size": 256, 00:22:01.108 "bdev_auto_examine": true, 00:22:01.108 "iobuf_small_cache_size": 128, 00:22:01.108 "iobuf_large_cache_size": 16 00:22:01.108 } 00:22:01.108 }, 00:22:01.108 { 00:22:01.108 "method": "bdev_raid_set_options", 00:22:01.108 "params": { 00:22:01.108 "process_window_size_kb": 1024, 00:22:01.108 "process_max_bandwidth_mb_sec": 0 00:22:01.108 } 00:22:01.108 }, 00:22:01.108 { 00:22:01.108 "method": "bdev_iscsi_set_options", 00:22:01.108 "params": { 00:22:01.108 "timeout_sec": 30 00:22:01.108 } 00:22:01.108 }, 00:22:01.108 { 00:22:01.108 "method": "bdev_nvme_set_options", 00:22:01.108 "params": { 00:22:01.108 "action_on_timeout": "none", 00:22:01.108 "timeout_us": 0, 00:22:01.108 "timeout_admin_us": 0, 00:22:01.108 "keep_alive_timeout_ms": 10000, 00:22:01.108 "arbitration_burst": 0, 00:22:01.108 "low_priority_weight": 0, 00:22:01.108 "medium_priority_weight": 0, 00:22:01.108 "high_priority_weight": 0, 00:22:01.108 "nvme_adminq_poll_period_us": 10000, 00:22:01.108 "nvme_ioq_poll_period_us": 0, 00:22:01.108 "io_queue_requests": 0, 00:22:01.108 "delay_cmd_submit": true, 00:22:01.108 "transport_retry_count": 4, 00:22:01.108 "bdev_retry_count": 3, 00:22:01.108 "transport_ack_timeout": 0, 00:22:01.108 "ctrlr_loss_timeout_sec": 0, 00:22:01.108 "reconnect_delay_sec": 0, 00:22:01.108 "fast_io_fail_timeout_sec": 0, 00:22:01.108 "disable_auto_failback": false, 00:22:01.108 "generate_uuids": false, 00:22:01.108 "transport_tos": 0, 00:22:01.108 "nvme_error_stat": false, 00:22:01.108 "rdma_srq_size": 0, 00:22:01.108 "io_path_stat": false, 00:22:01.108 "allow_accel_sequence": false, 00:22:01.108 "rdma_max_cq_size": 0, 00:22:01.108 "rdma_cm_event_timeout_ms": 0, 00:22:01.108 
"dhchap_digests": [ 00:22:01.108 "sha256", 00:22:01.108 "sha384", 00:22:01.108 "sha512" 00:22:01.108 ], 00:22:01.108 "dhchap_dhgroups": [ 00:22:01.108 "null", 00:22:01.108 "ffdhe2048", 00:22:01.108 "ffdhe3072", 00:22:01.108 "ffdhe4096", 00:22:01.108 "ffdhe6144", 00:22:01.108 "ffdhe8192" 00:22:01.108 ] 00:22:01.108 } 00:22:01.108 }, 00:22:01.108 { 00:22:01.108 "method": "bdev_nvme_set_hotplug", 00:22:01.108 "params": { 00:22:01.108 "period_us": 100000, 00:22:01.108 "enable": false 00:22:01.108 } 00:22:01.108 }, 00:22:01.108 { 00:22:01.108 "method": "bdev_malloc_create", 00:22:01.108 "params": { 00:22:01.108 "name": "malloc0", 00:22:01.108 "num_blocks": 8192, 00:22:01.108 "block_size": 4096, 00:22:01.108 "physical_block_size": 4096, 00:22:01.108 "uuid": "d7907031-f3bb-4be5-81b4-2a43c6184069", 00:22:01.108 "optimal_io_boundary": 0, 00:22:01.108 "md_size": 0, 00:22:01.108 "dif_type": 0, 00:22:01.108 "dif_is_head_of_md": false, 00:22:01.108 "dif_pi_format": 0 00:22:01.108 } 00:22:01.108 }, 00:22:01.108 { 00:22:01.108 "method": "bdev_wait_for_examine" 00:22:01.108 } 00:22:01.108 ] 00:22:01.108 }, 00:22:01.108 { 00:22:01.108 "subsystem": "nbd", 00:22:01.108 "config": [] 00:22:01.108 }, 00:22:01.108 { 00:22:01.108 "subsystem": "scheduler", 00:22:01.108 "config": [ 00:22:01.108 { 00:22:01.108 "method": "framework_set_scheduler", 00:22:01.108 "params": { 00:22:01.108 "name": "static" 00:22:01.108 } 00:22:01.108 } 00:22:01.108 ] 00:22:01.108 }, 00:22:01.108 { 00:22:01.108 "subsystem": "nvmf", 00:22:01.108 "config": [ 00:22:01.108 { 00:22:01.108 "method": "nvmf_set_config", 00:22:01.108 "params": { 00:22:01.108 "discovery_filter": "match_any", 00:22:01.108 "admin_cmd_passthru": { 00:22:01.108 "identify_ctrlr": false 00:22:01.108 } 00:22:01.108 } 00:22:01.108 }, 00:22:01.108 { 00:22:01.108 "method": "nvmf_set_max_subsystems", 00:22:01.108 "params": { 00:22:01.108 "max_subsystems": 1024 00:22:01.108 } 00:22:01.108 }, 00:22:01.108 { 00:22:01.108 "method": "nvmf_set_crdt", 00:22:01.108 "params": { 00:22:01.108 "crdt1": 0, 00:22:01.108 "crdt2": 0, 00:22:01.108 "crdt3": 0 00:22:01.108 } 00:22:01.108 }, 00:22:01.108 { 00:22:01.108 "method": "nvmf_create_transport", 00:22:01.108 "params": { 00:22:01.108 "trtype": "TCP", 00:22:01.108 "max_queue_depth": 128, 00:22:01.108 "max_io_qpairs_per_ctrlr": 127, 00:22:01.108 "in_capsule_data_size": 4096, 00:22:01.108 "max_io_size": 131072, 00:22:01.108 "io_unit_size": 131072, 00:22:01.108 "max_aq_depth": 128, 00:22:01.108 "num_shared_buffers": 511, 00:22:01.108 "buf_cache_size": 4294967295, 00:22:01.108 "dif_insert_or_strip": false, 00:22:01.108 "zcopy": false, 00:22:01.108 "c2h_success": false, 00:22:01.108 "sock_priority": 0, 00:22:01.108 "abort_timeout_sec": 1, 00:22:01.108 "ack_timeout": 0, 00:22:01.108 "data_wr_pool_size": 0 00:22:01.108 } 00:22:01.108 }, 00:22:01.108 { 00:22:01.108 "method": "nvmf_create_subsystem", 00:22:01.108 "params": { 00:22:01.108 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.108 "allow_any_host": false, 00:22:01.108 "serial_number": "SPDK00000000000001", 00:22:01.108 "model_number": "SPDK bdev Controller", 00:22:01.108 "max_namespaces": 10, 00:22:01.108 "min_cntlid": 1, 00:22:01.108 "max_cntlid": 65519, 00:22:01.108 "ana_reporting": false 00:22:01.108 } 00:22:01.108 }, 00:22:01.108 { 00:22:01.108 "method": "nvmf_subsystem_add_host", 00:22:01.108 "params": { 00:22:01.108 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.108 "host": "nqn.2016-06.io.spdk:host1", 00:22:01.108 "psk": "/tmp/tmp.s70BQFaoie" 00:22:01.108 } 00:22:01.108 }, 00:22:01.108 { 
00:22:01.108 "method": "nvmf_subsystem_add_ns", 00:22:01.108 "params": { 00:22:01.108 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.108 "namespace": { 00:22:01.108 "nsid": 1, 00:22:01.108 "bdev_name": "malloc0", 00:22:01.108 "nguid": "D7907031F3BB4BE581B42A43C6184069", 00:22:01.108 "uuid": "d7907031-f3bb-4be5-81b4-2a43c6184069", 00:22:01.108 "no_auto_visible": false 00:22:01.108 } 00:22:01.108 } 00:22:01.108 }, 00:22:01.108 { 00:22:01.108 "method": "nvmf_subsystem_add_listener", 00:22:01.108 "params": { 00:22:01.108 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.109 "listen_address": { 00:22:01.109 "trtype": "TCP", 00:22:01.109 "adrfam": "IPv4", 00:22:01.109 "traddr": "10.0.0.2", 00:22:01.109 "trsvcid": "4420" 00:22:01.109 }, 00:22:01.109 "secure_channel": true 00:22:01.109 } 00:22:01.109 } 00:22:01.109 ] 00:22:01.109 } 00:22:01.109 ] 00:22:01.109 }' 00:22:01.109 23:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:01.367 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:01.367 "subsystems": [ 00:22:01.367 { 00:22:01.367 "subsystem": "keyring", 00:22:01.367 "config": [] 00:22:01.367 }, 00:22:01.367 { 00:22:01.367 "subsystem": "iobuf", 00:22:01.367 "config": [ 00:22:01.367 { 00:22:01.367 "method": "iobuf_set_options", 00:22:01.367 "params": { 00:22:01.367 "small_pool_count": 8192, 00:22:01.367 "large_pool_count": 1024, 00:22:01.367 "small_bufsize": 8192, 00:22:01.367 "large_bufsize": 135168 00:22:01.367 } 00:22:01.367 } 00:22:01.367 ] 00:22:01.367 }, 00:22:01.367 { 00:22:01.367 "subsystem": "sock", 00:22:01.367 "config": [ 00:22:01.367 { 00:22:01.367 "method": "sock_set_default_impl", 00:22:01.367 "params": { 00:22:01.367 "impl_name": "posix" 00:22:01.367 } 00:22:01.367 }, 00:22:01.367 { 00:22:01.367 "method": "sock_impl_set_options", 00:22:01.367 "params": { 00:22:01.367 "impl_name": "ssl", 00:22:01.367 "recv_buf_size": 4096, 00:22:01.367 "send_buf_size": 4096, 00:22:01.367 "enable_recv_pipe": true, 00:22:01.367 "enable_quickack": false, 00:22:01.367 "enable_placement_id": 0, 00:22:01.367 "enable_zerocopy_send_server": true, 00:22:01.367 "enable_zerocopy_send_client": false, 00:22:01.367 "zerocopy_threshold": 0, 00:22:01.367 "tls_version": 0, 00:22:01.367 "enable_ktls": false 00:22:01.367 } 00:22:01.367 }, 00:22:01.367 { 00:22:01.367 "method": "sock_impl_set_options", 00:22:01.367 "params": { 00:22:01.367 "impl_name": "posix", 00:22:01.367 "recv_buf_size": 2097152, 00:22:01.367 "send_buf_size": 2097152, 00:22:01.367 "enable_recv_pipe": true, 00:22:01.367 "enable_quickack": false, 00:22:01.367 "enable_placement_id": 0, 00:22:01.367 "enable_zerocopy_send_server": true, 00:22:01.367 "enable_zerocopy_send_client": false, 00:22:01.367 "zerocopy_threshold": 0, 00:22:01.367 "tls_version": 0, 00:22:01.367 "enable_ktls": false 00:22:01.367 } 00:22:01.367 } 00:22:01.367 ] 00:22:01.367 }, 00:22:01.367 { 00:22:01.367 "subsystem": "vmd", 00:22:01.367 "config": [] 00:22:01.367 }, 00:22:01.367 { 00:22:01.367 "subsystem": "accel", 00:22:01.367 "config": [ 00:22:01.367 { 00:22:01.367 "method": "accel_set_options", 00:22:01.367 "params": { 00:22:01.367 "small_cache_size": 128, 00:22:01.367 "large_cache_size": 16, 00:22:01.367 "task_count": 2048, 00:22:01.367 "sequence_count": 2048, 00:22:01.367 "buf_count": 2048 00:22:01.367 } 00:22:01.367 } 00:22:01.367 ] 00:22:01.367 }, 00:22:01.367 { 00:22:01.367 "subsystem": "bdev", 00:22:01.367 
"config": [ 00:22:01.367 { 00:22:01.367 "method": "bdev_set_options", 00:22:01.367 "params": { 00:22:01.367 "bdev_io_pool_size": 65535, 00:22:01.367 "bdev_io_cache_size": 256, 00:22:01.367 "bdev_auto_examine": true, 00:22:01.367 "iobuf_small_cache_size": 128, 00:22:01.367 "iobuf_large_cache_size": 16 00:22:01.367 } 00:22:01.367 }, 00:22:01.367 { 00:22:01.367 "method": "bdev_raid_set_options", 00:22:01.367 "params": { 00:22:01.367 "process_window_size_kb": 1024, 00:22:01.367 "process_max_bandwidth_mb_sec": 0 00:22:01.367 } 00:22:01.367 }, 00:22:01.367 { 00:22:01.367 "method": "bdev_iscsi_set_options", 00:22:01.367 "params": { 00:22:01.367 "timeout_sec": 30 00:22:01.367 } 00:22:01.367 }, 00:22:01.367 { 00:22:01.367 "method": "bdev_nvme_set_options", 00:22:01.367 "params": { 00:22:01.367 "action_on_timeout": "none", 00:22:01.367 "timeout_us": 0, 00:22:01.367 "timeout_admin_us": 0, 00:22:01.367 "keep_alive_timeout_ms": 10000, 00:22:01.367 "arbitration_burst": 0, 00:22:01.367 "low_priority_weight": 0, 00:22:01.367 "medium_priority_weight": 0, 00:22:01.367 "high_priority_weight": 0, 00:22:01.367 "nvme_adminq_poll_period_us": 10000, 00:22:01.367 "nvme_ioq_poll_period_us": 0, 00:22:01.367 "io_queue_requests": 512, 00:22:01.367 "delay_cmd_submit": true, 00:22:01.367 "transport_retry_count": 4, 00:22:01.367 "bdev_retry_count": 3, 00:22:01.367 "transport_ack_timeout": 0, 00:22:01.367 "ctrlr_loss_timeout_sec": 0, 00:22:01.367 "reconnect_delay_sec": 0, 00:22:01.367 "fast_io_fail_timeout_sec": 0, 00:22:01.367 "disable_auto_failback": false, 00:22:01.367 "generate_uuids": false, 00:22:01.367 "transport_tos": 0, 00:22:01.367 "nvme_error_stat": false, 00:22:01.367 "rdma_srq_size": 0, 00:22:01.367 "io_path_stat": false, 00:22:01.367 "allow_accel_sequence": false, 00:22:01.367 "rdma_max_cq_size": 0, 00:22:01.368 "rdma_cm_event_timeout_ms": 0, 00:22:01.368 "dhchap_digests": [ 00:22:01.368 "sha256", 00:22:01.368 "sha384", 00:22:01.368 "sha512" 00:22:01.368 ], 00:22:01.368 "dhchap_dhgroups": [ 00:22:01.368 "null", 00:22:01.368 "ffdhe2048", 00:22:01.368 "ffdhe3072", 00:22:01.368 "ffdhe4096", 00:22:01.368 "ffdhe6144", 00:22:01.368 "ffdhe8192" 00:22:01.368 ] 00:22:01.368 } 00:22:01.368 }, 00:22:01.368 { 00:22:01.368 "method": "bdev_nvme_attach_controller", 00:22:01.368 "params": { 00:22:01.368 "name": "TLSTEST", 00:22:01.368 "trtype": "TCP", 00:22:01.368 "adrfam": "IPv4", 00:22:01.368 "traddr": "10.0.0.2", 00:22:01.368 "trsvcid": "4420", 00:22:01.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.368 "prchk_reftag": false, 00:22:01.368 "prchk_guard": false, 00:22:01.368 "ctrlr_loss_timeout_sec": 0, 00:22:01.368 "reconnect_delay_sec": 0, 00:22:01.368 "fast_io_fail_timeout_sec": 0, 00:22:01.368 "psk": "/tmp/tmp.s70BQFaoie", 00:22:01.368 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:01.368 "hdgst": false, 00:22:01.368 "ddgst": false 00:22:01.368 } 00:22:01.368 }, 00:22:01.368 { 00:22:01.368 "method": "bdev_nvme_set_hotplug", 00:22:01.368 "params": { 00:22:01.368 "period_us": 100000, 00:22:01.368 "enable": false 00:22:01.368 } 00:22:01.368 }, 00:22:01.368 { 00:22:01.368 "method": "bdev_wait_for_examine" 00:22:01.368 } 00:22:01.368 ] 00:22:01.368 }, 00:22:01.368 { 00:22:01.368 "subsystem": "nbd", 00:22:01.368 "config": [] 00:22:01.368 } 00:22:01.368 ] 00:22:01.368 }' 00:22:01.368 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 1420929 00:22:01.368 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1420929 ']' 00:22:01.368 23:27:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1420929 00:22:01.368 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:01.368 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:01.368 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1420929 00:22:01.626 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:01.626 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:01.626 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1420929' 00:22:01.626 killing process with pid 1420929 00:22:01.626 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1420929 00:22:01.626 Received shutdown signal, test time was about 10.000000 seconds 00:22:01.626 00:22:01.626 Latency(us) 00:22:01.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.626 =================================================================================================================== 00:22:01.626 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:01.626 [2024-07-25 23:27:59.098198] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:01.626 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1420929 00:22:01.626 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 1420677 00:22:01.626 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1420677 ']' 00:22:01.626 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1420677 00:22:01.626 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:01.626 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:01.626 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1420677 00:22:01.626 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:01.626 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:01.626 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1420677' 00:22:01.626 killing process with pid 1420677 00:22:01.626 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1420677 00:22:01.626 [2024-07-25 23:27:59.342253] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:01.626 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1420677 00:22:01.885 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:01.885 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:01.885 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:01.885 "subsystems": [ 00:22:01.885 { 00:22:01.885 "subsystem": 
"keyring", 00:22:01.885 "config": [] 00:22:01.885 }, 00:22:01.885 { 00:22:01.885 "subsystem": "iobuf", 00:22:01.885 "config": [ 00:22:01.885 { 00:22:01.885 "method": "iobuf_set_options", 00:22:01.885 "params": { 00:22:01.885 "small_pool_count": 8192, 00:22:01.885 "large_pool_count": 1024, 00:22:01.885 "small_bufsize": 8192, 00:22:01.885 "large_bufsize": 135168 00:22:01.885 } 00:22:01.885 } 00:22:01.885 ] 00:22:01.885 }, 00:22:01.885 { 00:22:01.885 "subsystem": "sock", 00:22:01.885 "config": [ 00:22:01.885 { 00:22:01.885 "method": "sock_set_default_impl", 00:22:01.885 "params": { 00:22:01.885 "impl_name": "posix" 00:22:01.885 } 00:22:01.885 }, 00:22:01.885 { 00:22:01.885 "method": "sock_impl_set_options", 00:22:01.885 "params": { 00:22:01.885 "impl_name": "ssl", 00:22:01.885 "recv_buf_size": 4096, 00:22:01.885 "send_buf_size": 4096, 00:22:01.885 "enable_recv_pipe": true, 00:22:01.885 "enable_quickack": false, 00:22:01.885 "enable_placement_id": 0, 00:22:01.885 "enable_zerocopy_send_server": true, 00:22:01.885 "enable_zerocopy_send_client": false, 00:22:01.885 "zerocopy_threshold": 0, 00:22:01.885 "tls_version": 0, 00:22:01.885 "enable_ktls": false 00:22:01.885 } 00:22:01.885 }, 00:22:01.885 { 00:22:01.885 "method": "sock_impl_set_options", 00:22:01.885 "params": { 00:22:01.885 "impl_name": "posix", 00:22:01.885 "recv_buf_size": 2097152, 00:22:01.885 "send_buf_size": 2097152, 00:22:01.885 "enable_recv_pipe": true, 00:22:01.885 "enable_quickack": false, 00:22:01.885 "enable_placement_id": 0, 00:22:01.885 "enable_zerocopy_send_server": true, 00:22:01.885 "enable_zerocopy_send_client": false, 00:22:01.885 "zerocopy_threshold": 0, 00:22:01.885 "tls_version": 0, 00:22:01.885 "enable_ktls": false 00:22:01.886 } 00:22:01.886 } 00:22:01.886 ] 00:22:01.886 }, 00:22:01.886 { 00:22:01.886 "subsystem": "vmd", 00:22:01.886 "config": [] 00:22:01.886 }, 00:22:01.886 { 00:22:01.886 "subsystem": "accel", 00:22:01.886 "config": [ 00:22:01.886 { 00:22:01.886 "method": "accel_set_options", 00:22:01.886 "params": { 00:22:01.886 "small_cache_size": 128, 00:22:01.886 "large_cache_size": 16, 00:22:01.886 "task_count": 2048, 00:22:01.886 "sequence_count": 2048, 00:22:01.886 "buf_count": 2048 00:22:01.886 } 00:22:01.886 } 00:22:01.886 ] 00:22:01.886 }, 00:22:01.886 { 00:22:01.886 "subsystem": "bdev", 00:22:01.886 "config": [ 00:22:01.886 { 00:22:01.886 "method": "bdev_set_options", 00:22:01.886 "params": { 00:22:01.886 "bdev_io_pool_size": 65535, 00:22:01.886 "bdev_io_cache_size": 256, 00:22:01.886 "bdev_auto_examine": true, 00:22:01.886 "iobuf_small_cache_size": 128, 00:22:01.886 "iobuf_large_cache_size": 16 00:22:01.886 } 00:22:01.886 }, 00:22:01.886 { 00:22:01.886 "method": "bdev_raid_set_options", 00:22:01.886 "params": { 00:22:01.886 "process_window_size_kb": 1024, 00:22:01.886 "process_max_bandwidth_mb_sec": 0 00:22:01.886 } 00:22:01.886 }, 00:22:01.886 { 00:22:01.886 "method": "bdev_iscsi_set_options", 00:22:01.886 "params": { 00:22:01.886 "timeout_sec": 30 00:22:01.886 } 00:22:01.886 }, 00:22:01.886 { 00:22:01.886 "method": "bdev_nvme_set_options", 00:22:01.886 "params": { 00:22:01.886 "action_on_timeout": "none", 00:22:01.886 "timeout_us": 0, 00:22:01.886 "timeout_admin_us": 0, 00:22:01.886 "keep_alive_timeout_ms": 10000, 00:22:01.886 "arbitration_burst": 0, 00:22:01.886 "low_priority_weight": 0, 00:22:01.886 "medium_priority_weight": 0, 00:22:01.886 "high_priority_weight": 0, 00:22:01.886 "nvme_adminq_poll_period_us": 10000, 00:22:01.886 "nvme_ioq_poll_period_us": 0, 00:22:01.886 "io_queue_requests": 0, 
00:22:01.886 "delay_cmd_submit": true, 00:22:01.886 "transport_retry_count": 4, 00:22:01.886 "bdev_retry_count": 3, 00:22:01.886 "transport_ack_timeout": 0, 00:22:01.886 "ctrlr_loss_timeout_sec": 0, 00:22:01.886 "reconnect_delay_sec": 0, 00:22:01.886 "fast_io_fail_timeout_sec": 0, 00:22:01.886 "disable_auto_failback": false, 00:22:01.886 "generate_uuids": false, 00:22:01.886 "transport_tos": 0, 00:22:01.886 "nvme_error_stat": false, 00:22:01.886 "rdma_srq_size": 0, 00:22:01.886 "io_path_stat": false, 00:22:01.886 "allow_accel_sequence": false, 00:22:01.886 "rdma_max_cq_size": 0, 00:22:01.886 "rdma_cm_event_timeout_ms": 0, 00:22:01.886 "dhchap_digests": [ 00:22:01.886 "sha256", 00:22:01.886 "sha384", 00:22:01.886 "sha512" 00:22:01.886 ], 00:22:01.886 "dhchap_dhgroups": [ 00:22:01.886 "null", 00:22:01.886 "ffdhe2048", 00:22:01.886 "ffdhe3072", 00:22:01.886 "ffdhe4096", 00:22:01.886 "ffdhe6144", 00:22:01.886 "ffdhe8192" 00:22:01.886 ] 00:22:01.886 } 00:22:01.886 }, 00:22:01.886 { 00:22:01.886 "method": "bdev_nvme_set_hotplug", 00:22:01.886 "params": { 00:22:01.886 "period_us": 100000, 00:22:01.886 "enable": false 00:22:01.886 } 00:22:01.886 }, 00:22:01.886 { 00:22:01.886 "method": "bdev_malloc_create", 00:22:01.886 "params": { 00:22:01.886 "name": "malloc0", 00:22:01.886 "num_blocks": 8192, 00:22:01.886 "block_size": 4096, 00:22:01.886 "physical_block_size": 4096, 00:22:01.886 "uuid": "d7907031-f3bb-4be5-81b4-2a43c6184069", 00:22:01.886 "optimal_io_boundary": 0, 00:22:01.886 "md_size": 0, 00:22:01.886 "dif_type": 0, 00:22:01.886 "dif_is_head_of_md": false, 00:22:01.886 "dif_pi_format": 0 00:22:01.886 } 00:22:01.886 }, 00:22:01.886 { 00:22:01.886 "method": "bdev_wait_for_examine" 00:22:01.886 } 00:22:01.886 ] 00:22:01.886 }, 00:22:01.886 { 00:22:01.886 "subsystem": "nbd", 00:22:01.886 "config": [] 00:22:01.886 }, 00:22:01.886 { 00:22:01.886 "subsystem": "scheduler", 00:22:01.886 "config": [ 00:22:01.886 { 00:22:01.886 "method": "framework_set_scheduler", 00:22:01.886 "params": { 00:22:01.886 "name": "static" 00:22:01.886 } 00:22:01.886 } 00:22:01.886 ] 00:22:01.886 }, 00:22:01.886 { 00:22:01.886 "subsystem": "nvmf", 00:22:01.886 "config": [ 00:22:01.886 { 00:22:01.886 "method": "nvmf_set_config", 00:22:01.886 "params": { 00:22:01.886 "discovery_filter": "match_any", 00:22:01.886 "admin_cmd_passthru": { 00:22:01.886 "identify_ctrlr": false 00:22:01.886 } 00:22:01.886 } 00:22:01.886 }, 00:22:01.886 { 00:22:01.886 "method": "nvmf_set_max_subsystems", 00:22:01.886 "params": { 00:22:01.886 "max_subsystems": 1024 00:22:01.886 } 00:22:01.886 }, 00:22:01.886 { 00:22:01.886 "method": "nvmf_set_crdt", 00:22:01.886 "params": { 00:22:01.886 "crdt1": 0, 00:22:01.886 "crdt2": 0, 00:22:01.886 "crdt3": 0 00:22:01.886 } 00:22:01.886 }, 00:22:01.886 { 00:22:01.886 "method": "nvmf_create_transport", 00:22:01.886 "params": { 00:22:01.886 "trtype": "TCP", 00:22:01.886 "max_queue_depth": 128, 00:22:01.886 "max_io_qpairs_per_ctrlr": 127, 00:22:01.886 "in_capsule_data_size": 4096, 00:22:01.886 "max_io_size": 131072, 00:22:01.886 "io_unit_size": 131072, 00:22:01.886 "max_aq_depth": 128, 00:22:01.886 "num_shared_buffers": 511, 00:22:01.886 "buf_cache_size": 4294967295, 00:22:01.886 "dif_insert_or_strip": false, 00:22:01.886 "zcopy": false, 00:22:01.886 "c2h_success": false, 00:22:01.886 "sock_priority": 0, 00:22:01.886 "abort_timeout_sec": 1, 00:22:01.886 "ack_timeout": 0, 00:22:01.886 "data_wr_pool_size": 0 00:22:01.886 } 00:22:01.886 }, 00:22:01.886 { 00:22:01.886 "method": "nvmf_create_subsystem", 00:22:01.886 
"params": { 00:22:01.886 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.886 "allow_any_host": false, 00:22:01.886 "serial_number": "SPDK00000000000001", 00:22:01.886 "model_number": "SPDK bdev Controller", 00:22:01.886 "max_namespaces": 10, 00:22:01.886 "min_cntlid": 1, 00:22:01.886 "max_cntlid": 65519, 00:22:01.886 "ana_reporting": false 00:22:01.886 } 00:22:01.886 }, 00:22:01.886 { 00:22:01.886 "method": "nvmf_subsystem_add_host", 00:22:01.886 "params": { 00:22:01.886 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.886 "host": "nqn.2016-06.io.spdk:host1", 00:22:01.886 "psk": "/tmp/tmp.s70BQFaoie" 00:22:01.886 } 00:22:01.886 }, 00:22:01.886 { 00:22:01.886 "method": "nvmf_subsystem_add_ns", 00:22:01.886 "params": { 00:22:01.886 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.886 "namespace": { 00:22:01.886 "nsid": 1, 00:22:01.886 "bdev_name": "malloc0", 00:22:01.886 "nguid": "D7907031F3BB4BE581B42A43C6184069", 00:22:01.886 "uuid": "d7907031-f3bb-4be5-81b4-2a43c6184069", 00:22:01.886 "no_auto_visible": false 00:22:01.886 } 00:22:01.886 } 00:22:01.886 }, 00:22:01.886 { 00:22:01.886 "method": "nvmf_subsystem_add_listener", 00:22:01.886 "params": { 00:22:01.886 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.886 "listen_address": { 00:22:01.886 "trtype": "TCP", 00:22:01.886 "adrfam": "IPv4", 00:22:01.886 "traddr": "10.0.0.2", 00:22:01.886 "trsvcid": "4420" 00:22:01.886 }, 00:22:01.886 "secure_channel": true 00:22:01.886 } 00:22:01.886 } 00:22:01.886 ] 00:22:01.886 } 00:22:01.886 ] 00:22:01.886 }' 00:22:01.886 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:01.886 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.886 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1421207 00:22:01.886 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:01.886 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1421207 00:22:01.886 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1421207 ']' 00:22:01.886 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.886 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:01.886 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.886 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:01.886 23:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.146 [2024-07-25 23:27:59.635805] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:22:02.146 [2024-07-25 23:27:59.635901] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.146 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.146 [2024-07-25 23:27:59.672157] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:02.146 [2024-07-25 23:27:59.699437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.146 [2024-07-25 23:27:59.781916] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.146 [2024-07-25 23:27:59.781968] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.146 [2024-07-25 23:27:59.781992] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.146 [2024-07-25 23:27:59.782003] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.146 [2024-07-25 23:27:59.782014] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:02.146 [2024-07-25 23:27:59.782100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.406 [2024-07-25 23:28:00.018872] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.406 [2024-07-25 23:28:00.047955] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:02.406 [2024-07-25 23:28:00.064021] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:02.406 [2024-07-25 23:28:00.064322] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:02.973 23:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:02.974 23:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:02.974 23:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:02.974 23:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:02.974 23:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.974 23:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.974 23:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1421356 00:22:02.974 23:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1421356 /var/tmp/bdevperf.sock 00:22:02.974 23:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1421356 ']' 00:22:02.974 23:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.974 23:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:02.974 23:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:02.974 23:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@204 -- # echo '{ 00:22:02.974 "subsystems": [ 00:22:02.974 { 00:22:02.974 "subsystem": "keyring", 00:22:02.974 "config": [] 00:22:02.974 }, 00:22:02.974 { 00:22:02.974 "subsystem": "iobuf", 00:22:02.974 "config": [ 00:22:02.974 { 00:22:02.974 "method": "iobuf_set_options", 00:22:02.974 "params": { 00:22:02.974 "small_pool_count": 8192, 00:22:02.974 "large_pool_count": 1024, 00:22:02.974 "small_bufsize": 8192, 00:22:02.974 "large_bufsize": 135168 00:22:02.974 } 00:22:02.974 } 00:22:02.974 ] 00:22:02.974 }, 00:22:02.974 { 00:22:02.974 "subsystem": "sock", 00:22:02.974 "config": [ 00:22:02.974 { 00:22:02.974 "method": "sock_set_default_impl", 00:22:02.974 "params": { 00:22:02.974 "impl_name": "posix" 00:22:02.974 } 00:22:02.974 }, 00:22:02.974 { 00:22:02.974 "method": "sock_impl_set_options", 00:22:02.974 "params": { 00:22:02.974 "impl_name": "ssl", 00:22:02.974 "recv_buf_size": 4096, 00:22:02.974 "send_buf_size": 4096, 00:22:02.974 "enable_recv_pipe": true, 00:22:02.974 "enable_quickack": false, 00:22:02.974 "enable_placement_id": 0, 00:22:02.974 "enable_zerocopy_send_server": true, 00:22:02.974 "enable_zerocopy_send_client": false, 00:22:02.974 "zerocopy_threshold": 0, 00:22:02.974 "tls_version": 0, 00:22:02.974 "enable_ktls": false 00:22:02.974 } 00:22:02.974 }, 00:22:02.974 { 00:22:02.974 "method": "sock_impl_set_options", 00:22:02.974 "params": { 00:22:02.974 "impl_name": "posix", 00:22:02.974 "recv_buf_size": 2097152, 00:22:02.974 "send_buf_size": 2097152, 00:22:02.974 "enable_recv_pipe": true, 00:22:02.974 "enable_quickack": false, 00:22:02.974 "enable_placement_id": 0, 00:22:02.974 "enable_zerocopy_send_server": true, 00:22:02.974 "enable_zerocopy_send_client": false, 00:22:02.974 "zerocopy_threshold": 0, 00:22:02.974 "tls_version": 0, 00:22:02.974 "enable_ktls": false 00:22:02.974 } 00:22:02.974 } 00:22:02.974 ] 00:22:02.974 }, 00:22:02.974 { 00:22:02.974 "subsystem": "vmd", 00:22:02.974 "config": [] 00:22:02.974 }, 00:22:02.974 { 00:22:02.974 "subsystem": "accel", 00:22:02.974 "config": [ 00:22:02.974 { 00:22:02.974 "method": "accel_set_options", 00:22:02.974 "params": { 00:22:02.974 "small_cache_size": 128, 00:22:02.974 "large_cache_size": 16, 00:22:02.974 "task_count": 2048, 00:22:02.974 "sequence_count": 2048, 00:22:02.974 "buf_count": 2048 00:22:02.974 } 00:22:02.974 } 00:22:02.974 ] 00:22:02.974 }, 00:22:02.974 { 00:22:02.974 "subsystem": "bdev", 00:22:02.974 "config": [ 00:22:02.974 { 00:22:02.974 "method": "bdev_set_options", 00:22:02.974 "params": { 00:22:02.974 "bdev_io_pool_size": 65535, 00:22:02.974 "bdev_io_cache_size": 256, 00:22:02.974 "bdev_auto_examine": true, 00:22:02.974 "iobuf_small_cache_size": 128, 00:22:02.974 "iobuf_large_cache_size": 16 00:22:02.974 } 00:22:02.974 }, 00:22:02.974 { 00:22:02.974 "method": "bdev_raid_set_options", 00:22:02.974 "params": { 00:22:02.974 "process_window_size_kb": 1024, 00:22:02.974 "process_max_bandwidth_mb_sec": 0 00:22:02.974 } 00:22:02.974 }, 00:22:02.974 { 00:22:02.974 "method": "bdev_iscsi_set_options", 00:22:02.974 "params": { 00:22:02.974 "timeout_sec": 30 00:22:02.974 } 00:22:02.974 }, 00:22:02.974 { 00:22:02.974 "method": "bdev_nvme_set_options", 00:22:02.974 "params": { 00:22:02.974 "action_on_timeout": "none", 00:22:02.974 "timeout_us": 0, 00:22:02.974 "timeout_admin_us": 0, 00:22:02.974 "keep_alive_timeout_ms": 10000, 00:22:02.974 "arbitration_burst": 0, 00:22:02.974 "low_priority_weight": 0, 00:22:02.974 "medium_priority_weight": 0, 00:22:02.974 "high_priority_weight": 0, 00:22:02.974 
"nvme_adminq_poll_period_us": 10000, 00:22:02.974 "nvme_ioq_poll_period_us": 0, 00:22:02.974 "io_queue_requests": 512, 00:22:02.974 "delay_cmd_submit": true, 00:22:02.974 "transport_retry_count": 4, 00:22:02.974 "bdev_retry_count": 3, 00:22:02.974 "transport_ack_timeout": 0, 00:22:02.974 "ctrlr_loss_timeout_sec": 0, 00:22:02.974 "reconnect_delay_sec": 0, 00:22:02.974 "fast_io_fail_timeout_sec": 0, 00:22:02.974 "disable_auto_failback": false, 00:22:02.974 "generate_uuids": false, 00:22:02.974 "transport_tos": 0, 00:22:02.974 "nvme_error_stat": false, 00:22:02.974 "rdma_srq_size": 0, 00:22:02.974 "io_path_stat": false, 00:22:02.974 "allow_accel_sequence": false, 00:22:02.974 "rdma_max_cq_size": 0, 00:22:02.974 "rdma_cm_event_timeout_ms": 0, 00:22:02.974 "dhchap_digests": [ 00:22:02.974 "sha256", 00:22:02.974 "sha384", 00:22:02.974 "sha512" 00:22:02.974 ], 00:22:02.974 "dhchap_dhgroups": [ 00:22:02.974 "null", 00:22:02.974 "ffdhe2048", 00:22:02.974 "ffdhe3072", 00:22:02.974 "ffdhe4096", 00:22:02.974 "ffdhe6144", 00:22:02.974 "ffdhe8192" 00:22:02.974 ] 00:22:02.974 } 00:22:02.974 }, 00:22:02.974 { 00:22:02.974 "method": "bdev_nvme_attach_controller", 00:22:02.974 "params": { 00:22:02.974 "name": "TLSTEST", 00:22:02.974 "trtype": "TCP", 00:22:02.974 "adrfam": "IPv4", 00:22:02.974 "traddr": "10.0.0.2", 00:22:02.974 "trsvcid": "4420", 00:22:02.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.974 "prchk_reftag": false, 00:22:02.974 "prchk_guard": false, 00:22:02.974 "ctrlr_loss_timeout_sec": 0, 00:22:02.974 "reconnect_delay_sec": 0, 00:22:02.974 "fast_io_fail_timeout_sec": 0, 00:22:02.974 "psk": "/tmp/tmp.s70BQFaoie", 00:22:02.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:02.974 "hdgst": false, 00:22:02.974 "ddgst": false 00:22:02.974 } 00:22:02.974 }, 00:22:02.974 { 00:22:02.974 "method": "bdev_nvme_set_hotplug", 00:22:02.974 "params": { 00:22:02.974 "period_us": 100000, 00:22:02.974 "enable": false 00:22:02.974 } 00:22:02.974 }, 00:22:02.974 { 00:22:02.974 "method": "bdev_wait_for_examine" 00:22:02.974 } 00:22:02.974 ] 00:22:02.974 }, 00:22:02.974 { 00:22:02.974 "subsystem": "nbd", 00:22:02.974 "config": [] 00:22:02.974 } 00:22:02.974 ] 00:22:02.974 }' 00:22:02.974 23:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.974 23:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:02.974 23:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.974 [2024-07-25 23:28:00.649500] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:02.974 [2024-07-25 23:28:00.649589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1421356 ] 00:22:02.974 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.974 [2024-07-25 23:28:00.680906] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:03.234 [2024-07-25 23:28:00.708609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.234 [2024-07-25 23:28:00.791643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.495 [2024-07-25 23:28:00.962925] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:03.495 [2024-07-25 23:28:00.963057] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:04.063 23:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:04.063 23:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:04.063 23:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:04.063 Running I/O for 10 seconds... 00:22:14.082 00:22:14.082 Latency(us) 00:22:14.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.082 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:14.082 Verification LBA range: start 0x0 length 0x2000 00:22:14.082 TLSTESTn1 : 10.02 3482.02 13.60 0.00 0.00 36691.57 6505.05 39224.51 00:22:14.082 =================================================================================================================== 00:22:14.082 Total : 3482.02 13.60 0.00 0.00 36691.57 6505.05 39224.51 00:22:14.082 0 00:22:14.082 23:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:14.082 23:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 1421356 00:22:14.082 23:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1421356 ']' 00:22:14.082 23:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1421356 00:22:14.082 23:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:14.082 23:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:14.082 23:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1421356 00:22:14.341 23:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:14.341 23:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:14.341 23:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1421356' 00:22:14.341 killing process with pid 1421356 00:22:14.341 23:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1421356 00:22:14.341 Received shutdown signal, test time was about 10.000000 seconds 00:22:14.341 00:22:14.341 Latency(us) 00:22:14.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.341 =================================================================================================================== 00:22:14.341 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:14.341 [2024-07-25 23:28:11.816176] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:14.341 23:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@974 -- # wait 1421356 00:22:14.341 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 1421207 00:22:14.341 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1421207 ']' 00:22:14.341 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1421207 00:22:14.341 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:14.341 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:14.341 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1421207 00:22:14.341 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:14.341 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:14.341 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1421207' 00:22:14.341 killing process with pid 1421207 00:22:14.341 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1421207 00:22:14.341 [2024-07-25 23:28:12.064465] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:14.341 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1421207 00:22:14.599 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:14.599 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:14.599 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:14.599 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:14.599 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1422685 00:22:14.600 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:14.600 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1422685 00:22:14.600 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1422685 ']' 00:22:14.600 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.600 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:14.600 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.600 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:14.600 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:14.859 [2024-07-25 23:28:12.363374] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
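[Annotation] Both processes are torn down through autotest_common.sh's killprocess, and the xtrace above exposes its shape: bail on an empty pid, probe liveness with kill -0, resolve the command name via ps, and refuse to signal anything that resolves to sudo. A reconstruction from the trace, not the helper's verbatim source:

    # Reconstructed from the xtrace (@950..@974); a sketch, not the
    # verbatim autotest_common.sh implementation.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                    # @950: need a pid
        kill -0 "$pid" 2>/dev/null || return 1       # @954: still alive?
        local process_name
        if [ "$(uname)" = Linux ]; then              # @955
            process_name=$(ps --no-headers -o comm= "$pid")  # @956
        fi
        [ "$process_name" = sudo ] && return 1       # @960: never kill sudo
        echo "killing process with pid $pid"         # @968
        kill "$pid"                                  # @969
        wait "$pid"                                  # @974: reap it
    }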
00:22:14.859 [2024-07-25 23:28:12.363456] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.859 EAL: No free 2048 kB hugepages reported on node 1 00:22:14.859 [2024-07-25 23:28:12.409859] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:14.859 [2024-07-25 23:28:12.442049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.859 [2024-07-25 23:28:12.537053] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.859 [2024-07-25 23:28:12.537129] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.859 [2024-07-25 23:28:12.537146] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.859 [2024-07-25 23:28:12.537159] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:14.859 [2024-07-25 23:28:12.537171] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:14.859 [2024-07-25 23:28:12.537210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.118 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:15.118 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:15.118 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:15.118 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:15.118 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.118 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.118 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.s70BQFaoie 00:22:15.118 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.s70BQFaoie 00:22:15.118 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:15.376 [2024-07-25 23:28:12.957954] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.376 23:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:15.633 23:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:15.890 [2024-07-25 23:28:13.483396] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:15.890 [2024-07-25 23:28:13.483659] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.890 23:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:16.146 malloc0 
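[Annotation] Here the flow changes: this target (pid 1422685) boots with no -c config at all, and setup_nvmf_tgt builds the same state over live RPCs. The -k flag on nvmf_subsystem_add_listener is what requests a TLS-capable listener and triggers the "TLS support is considered experimental" notice. The calls from the trace so far, collected in order, with $rpc standing in for the scripts/rpc.py path used in the workspace:

    # The provisioning calls from the trace, collected; $rpc stands in
    # for scripts/rpc.py under the SPDK checkout.
    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k              # -k: TLS listener
    $rpc bdev_malloc_create 32 4096 -b malloc0     # 32 MB backing bdev, 4 KiB blocks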
00:22:16.146 23:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:16.404 23:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.s70BQFaoie 00:22:16.661 [2024-07-25 23:28:14.228577] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:16.661 23:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1422968 00:22:16.661 23:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:16.661 23:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:16.661 23:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1422968 /var/tmp/bdevperf.sock 00:22:16.661 23:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1422968 ']' 00:22:16.661 23:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:16.661 23:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:16.661 23:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:16.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:16.662 23:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:16.662 23:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.662 [2024-07-25 23:28:14.285691] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:16.662 [2024-07-25 23:28:14.285776] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1422968 ] 00:22:16.662 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.662 [2024-07-25 23:28:14.318160] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
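[Annotation] The namespace and the allowed host are then attached; the PSK is still handed over as a raw path (--psk /tmp/tmp.s70BQFaoie), which is exactly the "PSK path" interface tcp.c warns is deprecated for removal in v24.09. Continuing the sketch above with the same $rpc shorthand:

    # Continuation of the sequence; the path form of --psk is the
    # deprecated interface the warning above refers to.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.s70BQFaoie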
00:22:16.662 [2024-07-25 23:28:14.349771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.918 [2024-07-25 23:28:14.440762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.918 23:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:16.918 23:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:16.918 23:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.s70BQFaoie 00:22:17.175 23:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:17.432 [2024-07-25 23:28:15.012788] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:17.432 nvme0n1 00:22:17.432 23:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:17.691 Running I/O for 1 seconds... 00:22:18.629 00:22:18.629 Latency(us) 00:22:18.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.629 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:18.629 Verification LBA range: start 0x0 length 0x2000 00:22:18.629 nvme0n1 : 1.02 3132.90 12.24 0.00 0.00 40513.49 7864.32 40389.59 00:22:18.629 =================================================================================================================== 00:22:18.629 Total : 3132.90 12.24 0.00 0.00 40513.49 7864.32 40389.59 00:22:18.629 0 00:22:18.629 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 1422968 00:22:18.629 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1422968 ']' 00:22:18.629 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1422968 00:22:18.629 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:18.629 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:18.629 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1422968 00:22:18.629 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:18.629 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:18.629 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1422968' 00:22:18.629 killing process with pid 1422968 00:22:18.629 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1422968 00:22:18.629 Received shutdown signal, test time was about 1.000000 seconds 00:22:18.629 00:22:18.629 Latency(us) 00:22:18.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.629 =================================================================================================================== 00:22:18.629 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:18.629 
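[Annotation] On the bdevperf side this run (pid 1422968, 3132.90 IOPS over the 1-second verify) uses the keyring interface instead of the path form: the PSK file is registered as a named key and the controller references it by name, so no psk deprecation fires on the initiator. The two calls as traced at tls.sh@227 and @228, again with $rpc standing in for scripts/rpc.py:

    # Keyring-based TLS attach, as traced: register the PSK file under
    # the name "key0", then attach referencing the key by name.
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.s70BQFaoie
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1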
23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1422968 00:22:18.888 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 1422685 00:22:18.888 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1422685 ']' 00:22:18.888 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1422685 00:22:18.888 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:18.888 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:18.888 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1422685 00:22:18.888 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:18.888 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:18.888 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1422685' 00:22:18.888 killing process with pid 1422685 00:22:18.888 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1422685 00:22:18.888 [2024-07-25 23:28:16.534928] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:18.888 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1422685 00:22:19.146 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:22:19.146 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:19.146 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:19.146 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.146 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1423244 00:22:19.146 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:19.146 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1423244 00:22:19.146 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1423244 ']' 00:22:19.146 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.146 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:19.146 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.146 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:19.146 23:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.146 [2024-07-25 23:28:16.837807] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
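[Annotation] A final target (nvmfpid 1423244) comes up for the save_config round-trip that closes the test: after another keyring-based run, rpc_cmd save_config captures the live target configuration and rpc.py -s /var/tmp/bdevperf.sock save_config captures bdevperf's, and in both dumps below the PSK now surfaces as a keyring entry (a keyring_file_add_key stanza plus "psk": "key0") rather than a raw path. A sketch of the capture step; the redirect targets are illustrative, not the harness's variables:

    # Sketch: dump both live configurations for comparison; tgt_config.json
    # and bperf_config.json are illustrative file names.
    $rpc save_config > tgt_config.json                              # target side
    $rpc -s /var/tmp/bdevperf.sock save_config > bperf_config.json  # initiator side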
00:22:19.146 [2024-07-25 23:28:16.837908] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.404 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.404 [2024-07-25 23:28:16.876314] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:19.404 [2024-07-25 23:28:16.908564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.404 [2024-07-25 23:28:16.996817] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.404 [2024-07-25 23:28:16.996884] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.404 [2024-07-25 23:28:16.996912] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.404 [2024-07-25 23:28:16.996926] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.404 [2024-07-25 23:28:16.996938] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:19.404 [2024-07-25 23:28:16.996973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.404 23:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:19.404 23:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:19.404 23:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:19.404 23:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:19.404 23:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.404 23:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.404 23:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:22:19.404 23:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.404 23:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.662 [2024-07-25 23:28:17.135261] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.662 malloc0 00:22:19.662 [2024-07-25 23:28:17.166847] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:19.662 [2024-07-25 23:28:17.176257] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.663 23:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.663 23:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=1423270 00:22:19.663 23:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:19.663 23:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 1423270 /var/tmp/bdevperf.sock 00:22:19.663 23:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1423270 ']' 00:22:19.663 23:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:19.663 23:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:19.663 23:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:19.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:19.663 23:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:19.663 23:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.663 [2024-07-25 23:28:17.244851] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:19.663 [2024-07-25 23:28:17.244921] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1423270 ] 00:22:19.663 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.663 [2024-07-25 23:28:17.277534] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:19.663 [2024-07-25 23:28:17.308344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.920 [2024-07-25 23:28:17.401440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.920 23:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:19.920 23:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:19.920 23:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.s70BQFaoie 00:22:20.178 23:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:20.436 [2024-07-25 23:28:17.976891] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:20.436 nvme0n1 00:22:20.436 23:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:20.695 Running I/O for 1 seconds... 
00:22:21.630 00:22:21.630 Latency(us) 00:22:21.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.630 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:21.630 Verification LBA range: start 0x0 length 0x2000 00:22:21.630 nvme0n1 : 1.02 3181.83 12.43 0.00 0.00 39810.13 9077.95 35923.44 00:22:21.630 =================================================================================================================== 00:22:21.630 Total : 3181.83 12.43 0.00 0.00 39810.13 9077.95 35923.44 00:22:21.630 0 00:22:21.630 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:22:21.630 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.630 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.630 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.630 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:22:21.630 "subsystems": [ 00:22:21.630 { 00:22:21.630 "subsystem": "keyring", 00:22:21.630 "config": [ 00:22:21.630 { 00:22:21.630 "method": "keyring_file_add_key", 00:22:21.630 "params": { 00:22:21.630 "name": "key0", 00:22:21.630 "path": "/tmp/tmp.s70BQFaoie" 00:22:21.630 } 00:22:21.630 } 00:22:21.630 ] 00:22:21.630 }, 00:22:21.630 { 00:22:21.630 "subsystem": "iobuf", 00:22:21.630 "config": [ 00:22:21.630 { 00:22:21.630 "method": "iobuf_set_options", 00:22:21.630 "params": { 00:22:21.630 "small_pool_count": 8192, 00:22:21.630 "large_pool_count": 1024, 00:22:21.630 "small_bufsize": 8192, 00:22:21.630 "large_bufsize": 135168 00:22:21.630 } 00:22:21.630 } 00:22:21.630 ] 00:22:21.630 }, 00:22:21.630 { 00:22:21.630 "subsystem": "sock", 00:22:21.630 "config": [ 00:22:21.630 { 00:22:21.630 "method": "sock_set_default_impl", 00:22:21.630 "params": { 00:22:21.630 "impl_name": "posix" 00:22:21.630 } 00:22:21.630 }, 00:22:21.630 { 00:22:21.630 "method": "sock_impl_set_options", 00:22:21.630 "params": { 00:22:21.630 "impl_name": "ssl", 00:22:21.630 "recv_buf_size": 4096, 00:22:21.630 "send_buf_size": 4096, 00:22:21.630 "enable_recv_pipe": true, 00:22:21.630 "enable_quickack": false, 00:22:21.630 "enable_placement_id": 0, 00:22:21.630 "enable_zerocopy_send_server": true, 00:22:21.630 "enable_zerocopy_send_client": false, 00:22:21.630 "zerocopy_threshold": 0, 00:22:21.630 "tls_version": 0, 00:22:21.630 "enable_ktls": false 00:22:21.630 } 00:22:21.630 }, 00:22:21.630 { 00:22:21.630 "method": "sock_impl_set_options", 00:22:21.630 "params": { 00:22:21.630 "impl_name": "posix", 00:22:21.630 "recv_buf_size": 2097152, 00:22:21.630 "send_buf_size": 2097152, 00:22:21.630 "enable_recv_pipe": true, 00:22:21.630 "enable_quickack": false, 00:22:21.630 "enable_placement_id": 0, 00:22:21.630 "enable_zerocopy_send_server": true, 00:22:21.630 "enable_zerocopy_send_client": false, 00:22:21.630 "zerocopy_threshold": 0, 00:22:21.630 "tls_version": 0, 00:22:21.630 "enable_ktls": false 00:22:21.630 } 00:22:21.630 } 00:22:21.630 ] 00:22:21.630 }, 00:22:21.630 { 00:22:21.630 "subsystem": "vmd", 00:22:21.630 "config": [] 00:22:21.630 }, 00:22:21.630 { 00:22:21.630 "subsystem": "accel", 00:22:21.630 "config": [ 00:22:21.630 { 00:22:21.630 "method": "accel_set_options", 00:22:21.630 "params": { 00:22:21.630 "small_cache_size": 128, 00:22:21.630 "large_cache_size": 16, 00:22:21.630 "task_count": 2048, 00:22:21.630 "sequence_count": 2048, 00:22:21.630 "buf_count": 
2048 00:22:21.630 } 00:22:21.630 } 00:22:21.630 ] 00:22:21.630 }, 00:22:21.630 { 00:22:21.630 "subsystem": "bdev", 00:22:21.630 "config": [ 00:22:21.630 { 00:22:21.630 "method": "bdev_set_options", 00:22:21.630 "params": { 00:22:21.630 "bdev_io_pool_size": 65535, 00:22:21.630 "bdev_io_cache_size": 256, 00:22:21.630 "bdev_auto_examine": true, 00:22:21.630 "iobuf_small_cache_size": 128, 00:22:21.630 "iobuf_large_cache_size": 16 00:22:21.630 } 00:22:21.630 }, 00:22:21.630 { 00:22:21.630 "method": "bdev_raid_set_options", 00:22:21.630 "params": { 00:22:21.630 "process_window_size_kb": 1024, 00:22:21.630 "process_max_bandwidth_mb_sec": 0 00:22:21.630 } 00:22:21.630 }, 00:22:21.630 { 00:22:21.630 "method": "bdev_iscsi_set_options", 00:22:21.630 "params": { 00:22:21.630 "timeout_sec": 30 00:22:21.630 } 00:22:21.630 }, 00:22:21.630 { 00:22:21.630 "method": "bdev_nvme_set_options", 00:22:21.630 "params": { 00:22:21.630 "action_on_timeout": "none", 00:22:21.630 "timeout_us": 0, 00:22:21.630 "timeout_admin_us": 0, 00:22:21.630 "keep_alive_timeout_ms": 10000, 00:22:21.630 "arbitration_burst": 0, 00:22:21.630 "low_priority_weight": 0, 00:22:21.630 "medium_priority_weight": 0, 00:22:21.630 "high_priority_weight": 0, 00:22:21.630 "nvme_adminq_poll_period_us": 10000, 00:22:21.630 "nvme_ioq_poll_period_us": 0, 00:22:21.630 "io_queue_requests": 0, 00:22:21.630 "delay_cmd_submit": true, 00:22:21.630 "transport_retry_count": 4, 00:22:21.630 "bdev_retry_count": 3, 00:22:21.630 "transport_ack_timeout": 0, 00:22:21.630 "ctrlr_loss_timeout_sec": 0, 00:22:21.630 "reconnect_delay_sec": 0, 00:22:21.630 "fast_io_fail_timeout_sec": 0, 00:22:21.630 "disable_auto_failback": false, 00:22:21.630 "generate_uuids": false, 00:22:21.630 "transport_tos": 0, 00:22:21.630 "nvme_error_stat": false, 00:22:21.630 "rdma_srq_size": 0, 00:22:21.630 "io_path_stat": false, 00:22:21.630 "allow_accel_sequence": false, 00:22:21.630 "rdma_max_cq_size": 0, 00:22:21.630 "rdma_cm_event_timeout_ms": 0, 00:22:21.630 "dhchap_digests": [ 00:22:21.630 "sha256", 00:22:21.630 "sha384", 00:22:21.630 "sha512" 00:22:21.630 ], 00:22:21.630 "dhchap_dhgroups": [ 00:22:21.630 "null", 00:22:21.630 "ffdhe2048", 00:22:21.630 "ffdhe3072", 00:22:21.630 "ffdhe4096", 00:22:21.630 "ffdhe6144", 00:22:21.630 "ffdhe8192" 00:22:21.630 ] 00:22:21.630 } 00:22:21.630 }, 00:22:21.630 { 00:22:21.630 "method": "bdev_nvme_set_hotplug", 00:22:21.630 "params": { 00:22:21.630 "period_us": 100000, 00:22:21.630 "enable": false 00:22:21.630 } 00:22:21.630 }, 00:22:21.630 { 00:22:21.630 "method": "bdev_malloc_create", 00:22:21.630 "params": { 00:22:21.630 "name": "malloc0", 00:22:21.630 "num_blocks": 8192, 00:22:21.630 "block_size": 4096, 00:22:21.630 "physical_block_size": 4096, 00:22:21.630 "uuid": "ed4a6b2a-2e17-4fb8-bd09-5b409289effd", 00:22:21.630 "optimal_io_boundary": 0, 00:22:21.630 "md_size": 0, 00:22:21.630 "dif_type": 0, 00:22:21.630 "dif_is_head_of_md": false, 00:22:21.630 "dif_pi_format": 0 00:22:21.630 } 00:22:21.630 }, 00:22:21.630 { 00:22:21.630 "method": "bdev_wait_for_examine" 00:22:21.630 } 00:22:21.630 ] 00:22:21.630 }, 00:22:21.630 { 00:22:21.630 "subsystem": "nbd", 00:22:21.630 "config": [] 00:22:21.630 }, 00:22:21.630 { 00:22:21.630 "subsystem": "scheduler", 00:22:21.630 "config": [ 00:22:21.630 { 00:22:21.630 "method": "framework_set_scheduler", 00:22:21.630 "params": { 00:22:21.630 "name": "static" 00:22:21.630 } 00:22:21.630 } 00:22:21.630 ] 00:22:21.630 }, 00:22:21.630 { 00:22:21.630 "subsystem": "nvmf", 00:22:21.630 "config": [ 00:22:21.630 { 00:22:21.630 
"method": "nvmf_set_config", 00:22:21.630 "params": { 00:22:21.630 "discovery_filter": "match_any", 00:22:21.630 "admin_cmd_passthru": { 00:22:21.630 "identify_ctrlr": false 00:22:21.630 } 00:22:21.630 } 00:22:21.630 }, 00:22:21.630 { 00:22:21.631 "method": "nvmf_set_max_subsystems", 00:22:21.631 "params": { 00:22:21.631 "max_subsystems": 1024 00:22:21.631 } 00:22:21.631 }, 00:22:21.631 { 00:22:21.631 "method": "nvmf_set_crdt", 00:22:21.631 "params": { 00:22:21.631 "crdt1": 0, 00:22:21.631 "crdt2": 0, 00:22:21.631 "crdt3": 0 00:22:21.631 } 00:22:21.631 }, 00:22:21.631 { 00:22:21.631 "method": "nvmf_create_transport", 00:22:21.631 "params": { 00:22:21.631 "trtype": "TCP", 00:22:21.631 "max_queue_depth": 128, 00:22:21.631 "max_io_qpairs_per_ctrlr": 127, 00:22:21.631 "in_capsule_data_size": 4096, 00:22:21.631 "max_io_size": 131072, 00:22:21.631 "io_unit_size": 131072, 00:22:21.631 "max_aq_depth": 128, 00:22:21.631 "num_shared_buffers": 511, 00:22:21.631 "buf_cache_size": 4294967295, 00:22:21.631 "dif_insert_or_strip": false, 00:22:21.631 "zcopy": false, 00:22:21.631 "c2h_success": false, 00:22:21.631 "sock_priority": 0, 00:22:21.631 "abort_timeout_sec": 1, 00:22:21.631 "ack_timeout": 0, 00:22:21.631 "data_wr_pool_size": 0 00:22:21.631 } 00:22:21.631 }, 00:22:21.631 { 00:22:21.631 "method": "nvmf_create_subsystem", 00:22:21.631 "params": { 00:22:21.631 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.631 "allow_any_host": false, 00:22:21.631 "serial_number": "00000000000000000000", 00:22:21.631 "model_number": "SPDK bdev Controller", 00:22:21.631 "max_namespaces": 32, 00:22:21.631 "min_cntlid": 1, 00:22:21.631 "max_cntlid": 65519, 00:22:21.631 "ana_reporting": false 00:22:21.631 } 00:22:21.631 }, 00:22:21.631 { 00:22:21.631 "method": "nvmf_subsystem_add_host", 00:22:21.631 "params": { 00:22:21.631 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.631 "host": "nqn.2016-06.io.spdk:host1", 00:22:21.631 "psk": "key0" 00:22:21.631 } 00:22:21.631 }, 00:22:21.631 { 00:22:21.631 "method": "nvmf_subsystem_add_ns", 00:22:21.631 "params": { 00:22:21.631 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.631 "namespace": { 00:22:21.631 "nsid": 1, 00:22:21.631 "bdev_name": "malloc0", 00:22:21.631 "nguid": "ED4A6B2A2E174FB8BD095B409289EFFD", 00:22:21.631 "uuid": "ed4a6b2a-2e17-4fb8-bd09-5b409289effd", 00:22:21.631 "no_auto_visible": false 00:22:21.631 } 00:22:21.631 } 00:22:21.631 }, 00:22:21.631 { 00:22:21.631 "method": "nvmf_subsystem_add_listener", 00:22:21.631 "params": { 00:22:21.631 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.631 "listen_address": { 00:22:21.631 "trtype": "TCP", 00:22:21.631 "adrfam": "IPv4", 00:22:21.631 "traddr": "10.0.0.2", 00:22:21.631 "trsvcid": "4420" 00:22:21.631 }, 00:22:21.631 "secure_channel": false, 00:22:21.631 "sock_impl": "ssl" 00:22:21.631 } 00:22:21.631 } 00:22:21.631 ] 00:22:21.631 } 00:22:21.631 ] 00:22:21.631 }' 00:22:21.631 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:22.201 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:22:22.201 "subsystems": [ 00:22:22.201 { 00:22:22.201 "subsystem": "keyring", 00:22:22.201 "config": [ 00:22:22.201 { 00:22:22.201 "method": "keyring_file_add_key", 00:22:22.201 "params": { 00:22:22.201 "name": "key0", 00:22:22.201 "path": "/tmp/tmp.s70BQFaoie" 00:22:22.201 } 00:22:22.201 } 00:22:22.201 ] 00:22:22.201 }, 00:22:22.201 { 00:22:22.201 "subsystem": "iobuf", 00:22:22.201 
"config": [ 00:22:22.201 { 00:22:22.201 "method": "iobuf_set_options", 00:22:22.201 "params": { 00:22:22.201 "small_pool_count": 8192, 00:22:22.201 "large_pool_count": 1024, 00:22:22.201 "small_bufsize": 8192, 00:22:22.201 "large_bufsize": 135168 00:22:22.201 } 00:22:22.201 } 00:22:22.201 ] 00:22:22.201 }, 00:22:22.201 { 00:22:22.201 "subsystem": "sock", 00:22:22.201 "config": [ 00:22:22.201 { 00:22:22.201 "method": "sock_set_default_impl", 00:22:22.201 "params": { 00:22:22.201 "impl_name": "posix" 00:22:22.201 } 00:22:22.201 }, 00:22:22.201 { 00:22:22.201 "method": "sock_impl_set_options", 00:22:22.201 "params": { 00:22:22.201 "impl_name": "ssl", 00:22:22.201 "recv_buf_size": 4096, 00:22:22.201 "send_buf_size": 4096, 00:22:22.201 "enable_recv_pipe": true, 00:22:22.201 "enable_quickack": false, 00:22:22.201 "enable_placement_id": 0, 00:22:22.201 "enable_zerocopy_send_server": true, 00:22:22.201 "enable_zerocopy_send_client": false, 00:22:22.201 "zerocopy_threshold": 0, 00:22:22.201 "tls_version": 0, 00:22:22.201 "enable_ktls": false 00:22:22.201 } 00:22:22.201 }, 00:22:22.201 { 00:22:22.201 "method": "sock_impl_set_options", 00:22:22.201 "params": { 00:22:22.201 "impl_name": "posix", 00:22:22.201 "recv_buf_size": 2097152, 00:22:22.201 "send_buf_size": 2097152, 00:22:22.201 "enable_recv_pipe": true, 00:22:22.201 "enable_quickack": false, 00:22:22.201 "enable_placement_id": 0, 00:22:22.201 "enable_zerocopy_send_server": true, 00:22:22.201 "enable_zerocopy_send_client": false, 00:22:22.201 "zerocopy_threshold": 0, 00:22:22.201 "tls_version": 0, 00:22:22.201 "enable_ktls": false 00:22:22.201 } 00:22:22.201 } 00:22:22.201 ] 00:22:22.201 }, 00:22:22.201 { 00:22:22.201 "subsystem": "vmd", 00:22:22.201 "config": [] 00:22:22.201 }, 00:22:22.201 { 00:22:22.201 "subsystem": "accel", 00:22:22.201 "config": [ 00:22:22.201 { 00:22:22.201 "method": "accel_set_options", 00:22:22.201 "params": { 00:22:22.201 "small_cache_size": 128, 00:22:22.201 "large_cache_size": 16, 00:22:22.201 "task_count": 2048, 00:22:22.201 "sequence_count": 2048, 00:22:22.201 "buf_count": 2048 00:22:22.201 } 00:22:22.201 } 00:22:22.201 ] 00:22:22.201 }, 00:22:22.201 { 00:22:22.201 "subsystem": "bdev", 00:22:22.201 "config": [ 00:22:22.201 { 00:22:22.201 "method": "bdev_set_options", 00:22:22.201 "params": { 00:22:22.201 "bdev_io_pool_size": 65535, 00:22:22.201 "bdev_io_cache_size": 256, 00:22:22.201 "bdev_auto_examine": true, 00:22:22.201 "iobuf_small_cache_size": 128, 00:22:22.201 "iobuf_large_cache_size": 16 00:22:22.201 } 00:22:22.201 }, 00:22:22.201 { 00:22:22.201 "method": "bdev_raid_set_options", 00:22:22.201 "params": { 00:22:22.201 "process_window_size_kb": 1024, 00:22:22.201 "process_max_bandwidth_mb_sec": 0 00:22:22.201 } 00:22:22.201 }, 00:22:22.201 { 00:22:22.201 "method": "bdev_iscsi_set_options", 00:22:22.201 "params": { 00:22:22.201 "timeout_sec": 30 00:22:22.201 } 00:22:22.201 }, 00:22:22.201 { 00:22:22.201 "method": "bdev_nvme_set_options", 00:22:22.201 "params": { 00:22:22.201 "action_on_timeout": "none", 00:22:22.201 "timeout_us": 0, 00:22:22.201 "timeout_admin_us": 0, 00:22:22.201 "keep_alive_timeout_ms": 10000, 00:22:22.201 "arbitration_burst": 0, 00:22:22.201 "low_priority_weight": 0, 00:22:22.201 "medium_priority_weight": 0, 00:22:22.201 "high_priority_weight": 0, 00:22:22.201 "nvme_adminq_poll_period_us": 10000, 00:22:22.201 "nvme_ioq_poll_period_us": 0, 00:22:22.201 "io_queue_requests": 512, 00:22:22.201 "delay_cmd_submit": true, 00:22:22.201 "transport_retry_count": 4, 00:22:22.201 "bdev_retry_count": 3, 
00:22:22.201 "transport_ack_timeout": 0, 00:22:22.201 "ctrlr_loss_timeout_sec": 0, 00:22:22.201 "reconnect_delay_sec": 0, 00:22:22.201 "fast_io_fail_timeout_sec": 0, 00:22:22.201 "disable_auto_failback": false, 00:22:22.201 "generate_uuids": false, 00:22:22.201 "transport_tos": 0, 00:22:22.201 "nvme_error_stat": false, 00:22:22.201 "rdma_srq_size": 0, 00:22:22.201 "io_path_stat": false, 00:22:22.201 "allow_accel_sequence": false, 00:22:22.201 "rdma_max_cq_size": 0, 00:22:22.201 "rdma_cm_event_timeout_ms": 0, 00:22:22.201 "dhchap_digests": [ 00:22:22.201 "sha256", 00:22:22.201 "sha384", 00:22:22.201 "sha512" 00:22:22.201 ], 00:22:22.201 "dhchap_dhgroups": [ 00:22:22.201 "null", 00:22:22.201 "ffdhe2048", 00:22:22.201 "ffdhe3072", 00:22:22.201 "ffdhe4096", 00:22:22.201 "ffdhe6144", 00:22:22.201 "ffdhe8192" 00:22:22.201 ] 00:22:22.201 } 00:22:22.201 }, 00:22:22.201 { 00:22:22.201 "method": "bdev_nvme_attach_controller", 00:22:22.201 "params": { 00:22:22.201 "name": "nvme0", 00:22:22.201 "trtype": "TCP", 00:22:22.201 "adrfam": "IPv4", 00:22:22.201 "traddr": "10.0.0.2", 00:22:22.201 "trsvcid": "4420", 00:22:22.201 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.201 "prchk_reftag": false, 00:22:22.201 "prchk_guard": false, 00:22:22.201 "ctrlr_loss_timeout_sec": 0, 00:22:22.201 "reconnect_delay_sec": 0, 00:22:22.201 "fast_io_fail_timeout_sec": 0, 00:22:22.201 "psk": "key0", 00:22:22.201 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:22.201 "hdgst": false, 00:22:22.201 "ddgst": false 00:22:22.201 } 00:22:22.201 }, 00:22:22.201 { 00:22:22.201 "method": "bdev_nvme_set_hotplug", 00:22:22.201 "params": { 00:22:22.201 "period_us": 100000, 00:22:22.202 "enable": false 00:22:22.202 } 00:22:22.202 }, 00:22:22.202 { 00:22:22.202 "method": "bdev_enable_histogram", 00:22:22.202 "params": { 00:22:22.202 "name": "nvme0n1", 00:22:22.202 "enable": true 00:22:22.202 } 00:22:22.202 }, 00:22:22.202 { 00:22:22.202 "method": "bdev_wait_for_examine" 00:22:22.202 } 00:22:22.202 ] 00:22:22.202 }, 00:22:22.202 { 00:22:22.202 "subsystem": "nbd", 00:22:22.202 "config": [] 00:22:22.202 } 00:22:22.202 ] 00:22:22.202 }' 00:22:22.202 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 1423270 00:22:22.202 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1423270 ']' 00:22:22.202 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1423270 00:22:22.202 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:22.202 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:22.202 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1423270 00:22:22.202 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:22.202 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:22.202 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1423270' 00:22:22.202 killing process with pid 1423270 00:22:22.202 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1423270 00:22:22.202 Received shutdown signal, test time was about 1.000000 seconds 00:22:22.202 00:22:22.202 Latency(us) 00:22:22.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.202 
=================================================================================================================== 00:22:22.202 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:22.202 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1423270 00:22:22.202 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 1423244 00:22:22.202 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1423244 ']' 00:22:22.202 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1423244 00:22:22.202 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:22.202 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:22.202 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1423244 00:22:22.202 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:22.202 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:22.202 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1423244' 00:22:22.202 killing process with pid 1423244 00:22:22.202 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1423244 00:22:22.202 23:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1423244 00:22:22.461 23:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:22:22.461 23:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:22:22.461 "subsystems": [ 00:22:22.461 { 00:22:22.461 "subsystem": "keyring", 00:22:22.461 "config": [ 00:22:22.461 { 00:22:22.461 "method": "keyring_file_add_key", 00:22:22.461 "params": { 00:22:22.461 "name": "key0", 00:22:22.461 "path": "/tmp/tmp.s70BQFaoie" 00:22:22.461 } 00:22:22.461 } 00:22:22.461 ] 00:22:22.461 }, 00:22:22.461 { 00:22:22.461 "subsystem": "iobuf", 00:22:22.461 "config": [ 00:22:22.461 { 00:22:22.461 "method": "iobuf_set_options", 00:22:22.461 "params": { 00:22:22.461 "small_pool_count": 8192, 00:22:22.461 "large_pool_count": 1024, 00:22:22.461 "small_bufsize": 8192, 00:22:22.461 "large_bufsize": 135168 00:22:22.461 } 00:22:22.461 } 00:22:22.461 ] 00:22:22.461 }, 00:22:22.461 { 00:22:22.461 "subsystem": "sock", 00:22:22.461 "config": [ 00:22:22.461 { 00:22:22.461 "method": "sock_set_default_impl", 00:22:22.461 "params": { 00:22:22.461 "impl_name": "posix" 00:22:22.461 } 00:22:22.461 }, 00:22:22.461 { 00:22:22.461 "method": "sock_impl_set_options", 00:22:22.461 "params": { 00:22:22.461 "impl_name": "ssl", 00:22:22.461 "recv_buf_size": 4096, 00:22:22.461 "send_buf_size": 4096, 00:22:22.461 "enable_recv_pipe": true, 00:22:22.461 "enable_quickack": false, 00:22:22.461 "enable_placement_id": 0, 00:22:22.461 "enable_zerocopy_send_server": true, 00:22:22.461 "enable_zerocopy_send_client": false, 00:22:22.461 "zerocopy_threshold": 0, 00:22:22.461 "tls_version": 0, 00:22:22.461 "enable_ktls": false 00:22:22.461 } 00:22:22.461 }, 00:22:22.461 { 00:22:22.461 "method": "sock_impl_set_options", 00:22:22.461 "params": { 00:22:22.461 "impl_name": "posix", 00:22:22.461 "recv_buf_size": 2097152, 00:22:22.461 "send_buf_size": 2097152, 00:22:22.461 "enable_recv_pipe": true, 00:22:22.461 "enable_quickack": false, 
00:22:22.461 "enable_placement_id": 0, 00:22:22.461 "enable_zerocopy_send_server": true, 00:22:22.461 "enable_zerocopy_send_client": false, 00:22:22.461 "zerocopy_threshold": 0, 00:22:22.461 "tls_version": 0, 00:22:22.461 "enable_ktls": false 00:22:22.461 } 00:22:22.461 } 00:22:22.461 ] 00:22:22.461 }, 00:22:22.461 { 00:22:22.461 "subsystem": "vmd", 00:22:22.461 "config": [] 00:22:22.461 }, 00:22:22.461 { 00:22:22.461 "subsystem": "accel", 00:22:22.461 "config": [ 00:22:22.461 { 00:22:22.461 "method": "accel_set_options", 00:22:22.461 "params": { 00:22:22.461 "small_cache_size": 128, 00:22:22.461 "large_cache_size": 16, 00:22:22.461 "task_count": 2048, 00:22:22.461 "sequence_count": 2048, 00:22:22.461 "buf_count": 2048 00:22:22.461 } 00:22:22.461 } 00:22:22.461 ] 00:22:22.461 }, 00:22:22.461 { 00:22:22.461 "subsystem": "bdev", 00:22:22.461 "config": [ 00:22:22.461 { 00:22:22.461 "method": "bdev_set_options", 00:22:22.461 "params": { 00:22:22.461 "bdev_io_pool_size": 65535, 00:22:22.461 "bdev_io_cache_size": 256, 00:22:22.461 "bdev_auto_examine": true, 00:22:22.461 "iobuf_small_cache_size": 128, 00:22:22.461 "iobuf_large_cache_size": 16 00:22:22.461 } 00:22:22.461 }, 00:22:22.461 { 00:22:22.461 "method": "bdev_raid_set_options", 00:22:22.461 "params": { 00:22:22.461 "process_window_size_kb": 1024, 00:22:22.461 "process_max_bandwidth_mb_sec": 0 00:22:22.461 } 00:22:22.461 }, 00:22:22.461 { 00:22:22.461 "method": "bdev_iscsi_set_options", 00:22:22.461 "params": { 00:22:22.461 "timeout_sec": 30 00:22:22.461 } 00:22:22.461 }, 00:22:22.461 { 00:22:22.461 "method": "bdev_nvme_set_options", 00:22:22.461 "params": { 00:22:22.461 "action_on_timeout": "none", 00:22:22.461 "timeout_us": 0, 00:22:22.461 "timeout_admin_us": 0, 00:22:22.461 "keep_alive_timeout_ms": 10000, 00:22:22.461 "arbitration_burst": 0, 00:22:22.461 "low_priority_weight": 0, 00:22:22.461 "medium_priority_weight": 0, 00:22:22.461 "high_priority_weight": 0, 00:22:22.461 "nvme_adminq_poll_period_us": 10000, 00:22:22.461 "nvme_ioq_poll_period_us": 0, 00:22:22.461 "io_queue_requests": 0, 00:22:22.461 "delay_cmd_submit": true, 00:22:22.461 "transport_retry_count": 4, 00:22:22.461 "bdev_retry_count": 3, 00:22:22.461 "transport_ack_timeout": 0, 00:22:22.461 "ctrlr_loss_timeout_sec": 0, 00:22:22.461 "reconnect_delay_sec": 0, 00:22:22.461 "fast_io_fail_timeout_sec": 0, 00:22:22.461 "disable_auto_failback": false, 00:22:22.461 "generate_uuids": false, 00:22:22.461 "transport_tos": 0, 00:22:22.461 "nvme_error_stat": false, 00:22:22.461 "rdma_srq_size": 0, 00:22:22.461 "io_path_stat": false, 00:22:22.461 "allow_accel_sequence": false, 00:22:22.461 "rdma_max_cq_size": 0, 00:22:22.461 "rdma_cm_event_timeout_ms": 0, 00:22:22.461 "dhchap_digests": [ 00:22:22.461 "sha256", 00:22:22.461 "sha384", 00:22:22.461 "sha512" 00:22:22.461 ], 00:22:22.461 "dhchap_dhgroups": [ 00:22:22.461 "null", 00:22:22.461 "ffdhe2048", 00:22:22.461 "ffdhe3072", 00:22:22.461 "ffdhe4096", 00:22:22.461 "ffdhe6144", 00:22:22.461 "ffdhe8192" 00:22:22.461 ] 00:22:22.461 } 00:22:22.461 }, 00:22:22.461 { 00:22:22.461 "method": "bdev_nvme_set_hotplug", 00:22:22.461 "params": { 00:22:22.461 "period_us": 100000, 00:22:22.461 "enable": false 00:22:22.461 } 00:22:22.461 }, 00:22:22.461 { 00:22:22.461 "method": "bdev_malloc_create", 00:22:22.461 "params": { 00:22:22.461 "name": "malloc0", 00:22:22.461 "num_blocks": 8192, 00:22:22.461 "block_size": 4096, 00:22:22.461 "physical_block_size": 4096, 00:22:22.461 "uuid": "ed4a6b2a-2e17-4fb8-bd09-5b409289effd", 00:22:22.461 
"optimal_io_boundary": 0, 00:22:22.461 "md_size": 0, 00:22:22.461 "dif_type": 0, 00:22:22.461 "dif_is_head_of_md": false, 00:22:22.461 "dif_pi_format": 0 00:22:22.461 } 00:22:22.461 }, 00:22:22.461 { 00:22:22.461 "method": "bdev_wait_for_examine" 00:22:22.461 } 00:22:22.461 ] 00:22:22.461 }, 00:22:22.461 { 00:22:22.461 "subsystem": "nbd", 00:22:22.461 "config": [] 00:22:22.461 }, 00:22:22.461 { 00:22:22.461 "subsystem": "scheduler", 00:22:22.461 "config": [ 00:22:22.461 { 00:22:22.462 "method": "framework_set_scheduler", 00:22:22.462 "params": { 00:22:22.462 "name": "static" 00:22:22.462 } 00:22:22.462 } 00:22:22.462 ] 00:22:22.462 }, 00:22:22.462 { 00:22:22.462 "subsystem": "nvmf", 00:22:22.462 "config": [ 00:22:22.462 { 00:22:22.462 "method": "nvmf_set_config", 00:22:22.462 "params": { 00:22:22.462 "discovery_filter": "match_any", 00:22:22.462 "admin_cmd_passthru": { 00:22:22.462 "identify_ctrlr": false 00:22:22.462 } 00:22:22.462 } 00:22:22.462 }, 00:22:22.462 { 00:22:22.462 "method": "nvmf_set_max_subsystems", 00:22:22.462 "params": { 00:22:22.462 "max_subsystems": 1024 00:22:22.462 } 00:22:22.462 }, 00:22:22.462 { 00:22:22.462 "method": "nvmf_set_crdt", 00:22:22.462 "params": { 00:22:22.462 "crdt1": 0, 00:22:22.462 "crdt2": 0, 00:22:22.462 "crdt3": 0 00:22:22.462 } 00:22:22.462 }, 00:22:22.462 { 00:22:22.462 "method": "nvmf_create_transport", 00:22:22.462 "params": { 00:22:22.462 "trtype": "TCP", 00:22:22.462 "max_queue_depth": 128, 00:22:22.462 "max_io_qpairs_per_ctrlr": 127, 00:22:22.462 "in_capsule_data_size": 4096, 00:22:22.462 "max_io_size": 131072, 00:22:22.462 "io_unit_size": 131072, 00:22:22.462 "max_aq_depth": 128, 00:22:22.462 "num_shared_buffers": 511, 00:22:22.462 "buf_cache_size": 4294967295, 00:22:22.462 "dif_insert_or_strip": false, 00:22:22.462 "zcopy": false, 00:22:22.462 "c2h_success": false, 00:22:22.462 "sock_priority": 0, 00:22:22.462 "abort_timeout_sec": 1, 00:22:22.462 " 23:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:22.462 ack_timeout": 0, 00:22:22.462 "data_wr_pool_size": 0 00:22:22.462 } 00:22:22.462 }, 00:22:22.462 { 00:22:22.462 "method": "nvmf_create_subsystem", 00:22:22.462 "params": { 00:22:22.462 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.462 "allow_any_host": false, 00:22:22.462 "serial_number": "00000000000000000000", 00:22:22.462 "model_number": "SPDK bdev Controller", 00:22:22.462 "max_namespaces": 32, 00:22:22.462 "min_cntlid": 1, 00:22:22.462 "max_cntlid": 65519, 00:22:22.462 "ana_reporting": false 00:22:22.462 } 00:22:22.462 }, 00:22:22.462 { 00:22:22.462 "method": "nvmf_subsystem_add_host", 00:22:22.462 "params": { 00:22:22.462 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.462 "host": "nqn.2016-06.io.spdk:host1", 00:22:22.462 "psk": "key0" 00:22:22.462 } 00:22:22.462 }, 00:22:22.462 { 00:22:22.462 "method": "nvmf_subsystem_add_ns", 00:22:22.462 "params": { 00:22:22.462 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.462 "namespace": { 00:22:22.462 "nsid": 1, 00:22:22.462 "bdev_name": "malloc0", 00:22:22.462 "nguid": "ED4A6B2A2E174FB8BD095B409289EFFD", 00:22:22.462 "uuid": "ed4a6b2a-2e17-4fb8-bd09-5b409289effd", 00:22:22.462 "no_auto_visible": false 00:22:22.462 } 00:22:22.462 } 00:22:22.462 }, 00:22:22.462 { 00:22:22.462 "method": "nvmf_subsystem_add_listener", 00:22:22.462 "params": { 00:22:22.462 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.462 "listen_address": { 00:22:22.462 "trtype": "TCP", 00:22:22.462 "adrfam": "IPv4", 00:22:22.462 "traddr": "10.0.0.2", 00:22:22.462 "trsvcid": 
"4420" 00:22:22.462 }, 00:22:22.462 "secure_channel": false, 00:22:22.462 "sock_impl": "ssl" 00:22:22.462 } 00:22:22.462 } 00:22:22.462 ] 00:22:22.462 } 00:22:22.462 ] 00:22:22.462 }' 00:22:22.462 23:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:22.462 23:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.462 23:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1423675 00:22:22.462 23:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:22.462 23:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1423675 00:22:22.462 23:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1423675 ']' 00:22:22.462 23:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.462 23:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:22.462 23:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.462 23:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:22.462 23:28:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.722 [2024-07-25 23:28:20.197927] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:22.722 [2024-07-25 23:28:20.198013] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.722 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.722 [2024-07-25 23:28:20.235092] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:22.722 [2024-07-25 23:28:20.267897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.722 [2024-07-25 23:28:20.354701] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.722 [2024-07-25 23:28:20.354766] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.722 [2024-07-25 23:28:20.354793] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.722 [2024-07-25 23:28:20.354807] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.722 [2024-07-25 23:28:20.354819] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:22.722 [2024-07-25 23:28:20.354900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.981 [2024-07-25 23:28:20.595995] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.981 [2024-07-25 23:28:20.633591] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:22.981 [2024-07-25 23:28:20.633854] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.547 23:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:23.547 23:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:23.547 23:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:23.547 23:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:23.547 23:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.547 23:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.547 23:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=1423826 00:22:23.547 23:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 1423826 /var/tmp/bdevperf.sock 00:22:23.547 23:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1423826 ']' 00:22:23.548 23:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.548 23:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:23.548 23:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:23.548 23:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
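The bdevperf initiator is launched next with its own configuration on /dev/fd/63, so the echoed JSON never touches disk. A sketch of that wiring using bash process substitution, with the flags copied from the @272 command line below ($bperfcfg is assumed to hold the config string captured earlier in the run):

    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")

<(...) is what produces the /dev/fd/63 path seen in the xtrace; -z makes bdevperf wait for an RPC trigger instead of starting I/O immediately, which is why perform_tests is issued as a separate step afterwards.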
00:22:23.548 23:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:22:23.548 "subsystems": [ 00:22:23.548 { 00:22:23.548 "subsystem": "keyring", 00:22:23.548 "config": [ 00:22:23.548 { 00:22:23.548 "method": "keyring_file_add_key", 00:22:23.548 "params": { 00:22:23.548 "name": "key0", 00:22:23.548 "path": "/tmp/tmp.s70BQFaoie" 00:22:23.548 } 00:22:23.548 } 00:22:23.548 ] 00:22:23.548 }, 00:22:23.548 { 00:22:23.548 "subsystem": "iobuf", 00:22:23.548 "config": [ 00:22:23.548 { 00:22:23.548 "method": "iobuf_set_options", 00:22:23.548 "params": { 00:22:23.548 "small_pool_count": 8192, 00:22:23.548 "large_pool_count": 1024, 00:22:23.548 "small_bufsize": 8192, 00:22:23.548 "large_bufsize": 135168 00:22:23.548 } 00:22:23.548 } 00:22:23.548 ] 00:22:23.548 }, 00:22:23.548 { 00:22:23.548 "subsystem": "sock", 00:22:23.548 "config": [ 00:22:23.548 { 00:22:23.548 "method": "sock_set_default_impl", 00:22:23.548 "params": { 00:22:23.548 "impl_name": "posix" 00:22:23.548 } 00:22:23.548 }, 00:22:23.548 { 00:22:23.548 "method": "sock_impl_set_options", 00:22:23.548 "params": { 00:22:23.548 "impl_name": "ssl", 00:22:23.548 "recv_buf_size": 4096, 00:22:23.548 "send_buf_size": 4096, 00:22:23.548 "enable_recv_pipe": true, 00:22:23.548 "enable_quickack": false, 00:22:23.548 "enable_placement_id": 0, 00:22:23.548 "enable_zerocopy_send_server": true, 00:22:23.548 "enable_zerocopy_send_client": false, 00:22:23.548 "zerocopy_threshold": 0, 00:22:23.548 "tls_version": 0, 00:22:23.548 "enable_ktls": false 00:22:23.548 } 00:22:23.548 }, 00:22:23.548 { 00:22:23.548 "method": "sock_impl_set_options", 00:22:23.548 "params": { 00:22:23.548 "impl_name": "posix", 00:22:23.548 "recv_buf_size": 2097152, 00:22:23.548 "send_buf_size": 2097152, 00:22:23.548 "enable_recv_pipe": true, 00:22:23.548 "enable_quickack": false, 00:22:23.548 "enable_placement_id": 0, 00:22:23.548 "enable_zerocopy_send_server": true, 00:22:23.548 "enable_zerocopy_send_client": false, 00:22:23.548 "zerocopy_threshold": 0, 00:22:23.548 "tls_version": 0, 00:22:23.548 "enable_ktls": false 00:22:23.548 } 00:22:23.548 } 00:22:23.548 ] 00:22:23.548 }, 00:22:23.548 { 00:22:23.548 "subsystem": "vmd", 00:22:23.548 "config": [] 00:22:23.548 }, 00:22:23.548 { 00:22:23.548 "subsystem": "accel", 00:22:23.548 "config": [ 00:22:23.548 { 00:22:23.548 "method": "accel_set_options", 00:22:23.548 "params": { 00:22:23.548 "small_cache_size": 128, 00:22:23.548 "large_cache_size": 16, 00:22:23.548 "task_count": 2048, 00:22:23.548 "sequence_count": 2048, 00:22:23.548 "buf_count": 2048 00:22:23.548 } 00:22:23.548 } 00:22:23.548 ] 00:22:23.548 }, 00:22:23.548 { 00:22:23.548 "subsystem": "bdev", 00:22:23.548 "config": [ 00:22:23.548 { 00:22:23.548 "method": "bdev_set_options", 00:22:23.548 "params": { 00:22:23.548 "bdev_io_pool_size": 65535, 00:22:23.548 "bdev_io_cache_size": 256, 00:22:23.548 "bdev_auto_examine": true, 00:22:23.548 "iobuf_small_cache_size": 128, 00:22:23.548 "iobuf_large_cache_size": 16 00:22:23.548 } 00:22:23.548 }, 00:22:23.548 { 00:22:23.548 "method": "bdev_raid_set_options", 00:22:23.548 "params": { 00:22:23.548 "process_window_size_kb": 1024, 00:22:23.548 "process_max_bandwidth_mb_sec": 0 00:22:23.548 } 00:22:23.548 }, 00:22:23.548 { 00:22:23.548 "method": "bdev_iscsi_set_options", 00:22:23.548 "params": { 00:22:23.548 "timeout_sec": 30 00:22:23.548 } 00:22:23.548 }, 00:22:23.548 { 00:22:23.548 "method": "bdev_nvme_set_options", 00:22:23.548 "params": { 00:22:23.548 "action_on_timeout": "none", 00:22:23.548 "timeout_us": 0, 
00:22:23.548 "timeout_admin_us": 0, 00:22:23.548 "keep_alive_timeout_ms": 10000, 00:22:23.548 "arbitration_burst": 0, 00:22:23.548 "low_priority_weight": 0, 00:22:23.548 "medium_priority_weight": 0, 00:22:23.548 "high_priority_weight": 0, 00:22:23.548 "nvme_adminq_poll_period_us": 10000, 00:22:23.548 "nvme_ioq_poll_period_us": 0, 00:22:23.548 "io_queue_requests": 512, 00:22:23.548 "delay_cmd_submit": true, 00:22:23.548 "transport_retry_count": 4, 00:22:23.548 "bdev_retry_count": 3, 00:22:23.548 "transport_ack_timeout": 0, 00:22:23.548 "ctrlr_loss_timeout_sec": 0, 00:22:23.548 "reconnect_delay_sec": 0, 00:22:23.548 "fast_io_fail_timeout_sec": 0, 00:22:23.548 "disable_auto_failback": false, 00:22:23.548 "generate_uuids": false, 00:22:23.548 "transport_tos": 0, 00:22:23.548 "nvme_error_stat": false, 00:22:23.548 "rdma_srq_size": 0, 00:22:23.548 "io_path_stat": false, 00:22:23.548 "allow_accel_sequence": false, 00:22:23.548 "rdma_max_cq_size": 0, 00:22:23.548 "rdma_cm_event_timeout_ms": 0, 00:22:23.548 "dhchap_digests": [ 00:22:23.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:23.548 "sha256", 00:22:23.548 "sha384", 00:22:23.548 "sha512" 00:22:23.548 ], 00:22:23.548 "dhchap_dhgroups": [ 00:22:23.548 "null", 00:22:23.548 "ffdhe2048", 00:22:23.548 "ffdhe3072", 00:22:23.548 "ffdhe4096", 00:22:23.548 "ffdhe6144", 00:22:23.548 "ffdhe8192" 00:22:23.548 ] 00:22:23.548 } 00:22:23.548 }, 00:22:23.548 { 00:22:23.548 "method": "bdev_nvme_attach_controller", 00:22:23.548 "params": { 00:22:23.548 "name": "nvme0", 00:22:23.548 "trtype": "TCP", 00:22:23.548 "adrfam": "IPv4", 00:22:23.548 "traddr": "10.0.0.2", 00:22:23.548 "trsvcid": "4420", 00:22:23.548 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.548 "prchk_reftag": false, 00:22:23.548 "prchk_guard": false, 00:22:23.548 "ctrlr_loss_timeout_sec": 0, 00:22:23.548 "reconnect_delay_sec": 0, 00:22:23.548 "fast_io_fail_timeout_sec": 0, 00:22:23.548 "psk": "key0", 00:22:23.548 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:23.548 "hdgst": false, 00:22:23.548 "ddgst": false 00:22:23.548 } 00:22:23.548 }, 00:22:23.548 { 00:22:23.549 "method": "bdev_nvme_set_hotplug", 00:22:23.549 "params": { 00:22:23.549 "period_us": 100000, 00:22:23.549 "enable": false 00:22:23.549 } 00:22:23.549 }, 00:22:23.549 { 00:22:23.549 "method": "bdev_enable_histogram", 00:22:23.549 "params": { 00:22:23.549 "name": "nvme0n1", 00:22:23.549 "enable": true 00:22:23.549 } 00:22:23.549 }, 00:22:23.549 { 00:22:23.549 "method": "bdev_wait_for_examine" 00:22:23.549 } 00:22:23.549 ] 00:22:23.549 }, 00:22:23.549 { 00:22:23.549 "subsystem": "nbd", 00:22:23.549 "config": [] 00:22:23.549 } 00:22:23.549 ] 00:22:23.549 }' 00:22:23.549 23:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:23.549 23:28:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.549 [2024-07-25 23:28:21.235804] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:23.549 [2024-07-25 23:28:21.235875] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1423826 ] 00:22:23.549 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.549 [2024-07-25 23:28:21.269574] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:22:23.807 [2024-07-25 23:28:21.298212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.807 [2024-07-25 23:28:21.389065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.067 [2024-07-25 23:28:21.570834] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:24.631 23:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:24.631 23:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:24.631 23:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:24.631 23:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:22:24.888 23:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.888 23:28:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:24.888 Running I/O for 1 seconds... 00:22:26.263 00:22:26.263 Latency(us) 00:22:26.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.263 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:26.263 Verification LBA range: start 0x0 length 0x2000 00:22:26.263 nvme0n1 : 1.02 3329.65 13.01 0.00 0.00 38057.21 8786.68 36505.98 00:22:26.263 =================================================================================================================== 00:22:26.263 Total : 3329.65 13.01 0.00 0.00 38057.21 8786.68 36505.98 00:22:26.263 0 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:26.263 nvmf_trace.0 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1423826 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1423826 ']' 
00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1423826 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1423826 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1423826' 00:22:26.263 killing process with pid 1423826 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1423826 00:22:26.263 Received shutdown signal, test time was about 1.000000 seconds 00:22:26.263 00:22:26.263 Latency(us) 00:22:26.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.263 =================================================================================================================== 00:22:26.263 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1423826 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:26.263 23:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:26.263 rmmod nvme_tcp 00:22:26.521 rmmod nvme_fabrics 00:22:26.521 rmmod nvme_keyring 00:22:26.521 23:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:26.521 23:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:26.521 23:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:26.521 23:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1423675 ']' 00:22:26.521 23:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1423675 00:22:26.521 23:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1423675 ']' 00:22:26.521 23:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1423675 00:22:26.521 23:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:26.521 23:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:26.521 23:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1423675 00:22:26.521 23:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:26.521 23:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:26.521 23:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1423675' 00:22:26.521 killing process with pid 1423675 00:22:26.521 23:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1423675 00:22:26.521 23:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1423675 00:22:26.781 23:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:26.781 23:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:26.781 23:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:26.781 23:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:26.781 23:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:26.781 23:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.781 23:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.781 23:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.687 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:28.687 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.5oiS6RtMO9 /tmp/tmp.ANANygW422 /tmp/tmp.s70BQFaoie 00:22:28.687 00:22:28.687 real 1m18.917s 00:22:28.687 user 2m9.110s 00:22:28.687 sys 0m24.623s 00:22:28.687 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:28.687 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.687 ************************************ 00:22:28.687 END TEST nvmf_tls 00:22:28.687 ************************************ 00:22:28.687 23:28:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:28.687 23:28:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:28.687 23:28:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:28.687 23:28:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:28.687 ************************************ 00:22:28.687 START TEST nvmf_fips 00:22:28.687 ************************************ 00:22:28.687 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:28.947 * Looking for test storage... 
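The pass condition checked above boils down to two RPCs against the bdevperf socket: bdev_nvme_get_controllers must report nvme0 (proving the TLS handshake and attach succeeded) before bdevperf.py perform_tests drives the verify workload. The same check as a standalone sketch, socket path as in the log:

    name=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] && ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

jq -r strips the JSON quoting so the bash comparison works; both commands appear verbatim in the trace above. The cleanup path also tars /dev/shm/nvmf_trace.0 into the output directory, matching the app's earlier hint that the trace file can be copied out for offline analysis with spdk_trace.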
00:22:28.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:28.947 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:22:28.948 Error setting digest 00:22:28.948 00D2A741157F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:28.948 00D2A741157F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:22:28.948 23:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:30.858 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 
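The device scan traced above buckets NICs by PCI vendor:device ID before picking ports to test: Intel 0x1592/0x159b land in the e810 array, 0x37d2 in x722, and the 0x15b3 entries in mlx; with -t tcp the e810 list is kept. A minimal stand-alone sketch of the same bucketing, assuming only the standard sysfs layout and collapsing the individual Mellanox IDs into one pattern for brevity (an illustration, not part of the harness):

for pci in /sys/bus/pci/devices/*; do
    ven=$(cat "$pci/vendor") dev=$(cat "$pci/device")
    case "$ven:$dev" in
        0x8086:0x1592 | 0x8086:0x159b) echo "e810: ${pci##*/}" ;;   # E810 family (ice driver, as in the trace)
        0x8086:0x37d2)                 echo "x722: ${pci##*/}" ;;   # X722 family
        0x15b3:*)                      echo "mlx:  ${pci##*/}" ;;   # Mellanox ConnectX parts
    esac
done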
00:22:30.858 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:30.858 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:30.858 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:30.858 
23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:30.858 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:30.859 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:31.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:31.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:22:31.123 00:22:31.123 --- 10.0.0.2 ping statistics --- 00:22:31.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.123 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:31.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:31.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:22:31.123 00:22:31.123 --- 10.0.0.1 ping statistics --- 00:22:31.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.123 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1426067 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1426067 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1426067 ']' 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.123 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:31.124 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.124 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:31.124 23:28:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:31.124 [2024-07-25 23:28:28.761664] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
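nvmf_tcp_init splits target and initiator across a network namespace so both ends run on one host: the first port (cvl_0_0) moves into cvl_0_0_ns_spdk and becomes the target at 10.0.0.2, the second (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is then launched under ip netns exec. Condensed from the commands traced above (the cvl_0_* names are this rig's renamed E810 ports; substitute your own):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port enters the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                             # sanity check: root ns -> namespaced target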
00:22:31.124 [2024-07-25 23:28:28.761757] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.124 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.124 [2024-07-25 23:28:28.801226] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:31.124 [2024-07-25 23:28:28.832214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.382 [2024-07-25 23:28:28.923960] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.382 [2024-07-25 23:28:28.924019] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.382 [2024-07-25 23:28:28.924036] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.382 [2024-07-25 23:28:28.924050] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.382 [2024-07-25 23:28:28.924070] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:31.382 [2024-07-25 23:28:28.924120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.382 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:31.382 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:22:31.382 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:31.382 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:31.382 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:31.382 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.382 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:31.382 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:31.382 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:31.382 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:31.382 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:31.382 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:31.382 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:31.382 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:31.641 [2024-07-25 23:28:29.312096] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.641 [2024-07-25 23:28:29.328093] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:22:31.641 [2024-07-25 23:28:29.328342] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.641 [2024-07-25 23:28:29.360047] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:31.641 malloc0 00:22:31.901 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:31.901 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1426218 00:22:31.901 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:31.901 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1426218 /var/tmp/bdevperf.sock 00:22:31.901 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1426218 ']' 00:22:31.901 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:31.901 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:31.901 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:31.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:31.901 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:31.901 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:31.901 [2024-07-25 23:28:29.454838] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:22:31.901 [2024-07-25 23:28:29.454927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1426218 ] 00:22:31.901 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.901 [2024-07-25 23:28:29.486618] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
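The bdevperf run launched above exercises the TLS path end to end: -m 0x4 pins it to core 2 (matching the "Reactor started on core 2" notice that follows), -z makes it wait for RPC configuration on /var/tmp/bdevperf.sock, and -q 128 -o 4096 -w verify -t 10 requests a 10-second verify workload at queue depth 128 with 4 KiB I/O. The controller is then attached over that socket with the PSK written earlier. A condensed sketch, with the full workspace paths from the trace shortened for readability:

build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk test/nvmf/fips/key.txt
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests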
00:22:31.901 [2024-07-25 23:28:29.513705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.901 [2024-07-25 23:28:29.603335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.160 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:32.160 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:22:32.160 23:28:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:32.418 [2024-07-25 23:28:29.996635] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:32.418 [2024-07-25 23:28:29.996768] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:32.418 TLSTESTn1 00:22:32.418 23:28:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:32.677 Running I/O for 10 seconds... 00:22:42.653 00:22:42.653 Latency(us) 00:22:42.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.653 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:42.653 Verification LBA range: start 0x0 length 0x2000 00:22:42.653 TLSTESTn1 : 10.02 3572.58 13.96 0.00 0.00 35761.70 10388.67 47962.64 00:22:42.653 =================================================================================================================== 00:22:42.653 Total : 3572.58 13.96 0.00 0.00 35761.70 10388.67 47962.64 00:22:42.653 0 00:22:42.653 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:42.653 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:42.653 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:22:42.653 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:22:42.653 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:42.653 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:42.654 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:42.654 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:42.654 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:42.654 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:42.654 nvmf_trace.0 00:22:42.654 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:22:42.654 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1426218 00:22:42.654 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # 
'[' -z 1426218 ']' 00:22:42.654 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1426218 00:22:42.654 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:42.654 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:42.654 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1426218 00:22:42.913 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:42.913 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:42.913 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1426218' 00:22:42.913 killing process with pid 1426218 00:22:42.913 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1426218 00:22:42.913 Received shutdown signal, test time was about 10.000000 seconds 00:22:42.913 00:22:42.913 Latency(us) 00:22:42.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.913 =================================================================================================================== 00:22:42.913 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:42.913 [2024-07-25 23:28:40.387240] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:42.913 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1426218 00:22:42.913 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:42.913 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:42.913 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:42.913 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:42.913 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:42.913 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:42.913 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:42.913 rmmod nvme_tcp 00:22:42.913 rmmod nvme_fabrics 00:22:43.173 rmmod nvme_keyring 00:22:43.173 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:43.173 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:43.173 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:43.173 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1426067 ']' 00:22:43.173 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1426067 00:22:43.173 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1426067 ']' 00:22:43.173 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1426067 00:22:43.173 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:43.173 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:43.174 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 1426067 00:22:43.174 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:43.174 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:43.174 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1426067' 00:22:43.174 killing process with pid 1426067 00:22:43.174 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1426067 00:22:43.174 [2024-07-25 23:28:40.697410] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:43.174 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1426067 00:22:43.434 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:43.434 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:43.434 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:43.434 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:43.435 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:43.435 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.435 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.435 23:28:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.340 23:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:45.340 23:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:45.340 00:22:45.340 real 0m16.603s 00:22:45.340 user 0m21.636s 00:22:45.340 sys 0m5.357s 00:22:45.340 23:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:45.340 23:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:45.340 ************************************ 00:22:45.340 END TEST nvmf_fips 00:22:45.340 ************************************ 00:22:45.340 23:28:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:22:45.340 23:28:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:45.340 23:28:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:45.340 23:28:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:45.340 23:28:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:45.340 ************************************ 00:22:45.340 START TEST nvmf_fuzz 00:22:45.340 ************************************ 00:22:45.340 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:45.601 * Looking for test storage... 
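nvmftestfini and cleanup unwind everything in reverse: unload the kernel initiator modules, kill the namespaced target, drop the namespace, flush the initiator address, and remove the PSK file. A condensed sketch of the same steps (the netns delete stands in for the harness's _remove_spdk_ns helper, an assumption about its effect):

modprobe -r nvme-tcp nvme-fabrics nvme-keyring    # mirrors the rmmod lines in the trace
kill "$nvmfpid"                                   # killprocess 1426067 above
ip netns delete cvl_0_0_ns_spdk                   # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1
rm -f test/nvmf/fips/key.txt                      # full workspace path in the trace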
00:22:45.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:45.601 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:45.601 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:22:45.601 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.601 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.601 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.601 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.601 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.601 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.601 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.601 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.601 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.601 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.601 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:45.601 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:45.601 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.601 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.601 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:45.601 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:45.601 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:45.601 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.601 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.601 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.601 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.601 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.601 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.601 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:22:45.602 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.602 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:22:45.602 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:45.602 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:45.602 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:45.602 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.602 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.602 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:45.602 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:45.602 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:45.602 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:22:45.602 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:45.602 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.602 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:45.602 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 
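Each target test sources nvmf/common.sh, which mints a fresh host identity with nvme-cli: nvme gen-hostnqn emits an nqn.2014-08.org.nvmexpress:uuid:<uuid> string, and the trailing UUID doubles as the host ID, as the NVME_HOSTNQN/NVME_HOSTID/NVME_HOST assignments above show. A quick sketch of that derivation (the parameter expansion is one plausible way to recover the UUID, not necessarily the harness's exact code):

NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
NVME_HOSTID=${NVME_HOSTNQN##*:}             # strip everything up to the last ':' to keep the UUID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")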
00:22:45.602 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:45.602 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.602 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.602 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.602 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:45.602 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:45.602 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:22:45.602 23:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:47.501 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:47.501 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:47.501 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:47.502 23:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:47.502 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:47.502 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:47.502 
23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:47.502 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:47.760 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:47.760 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:47.760 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:47.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:47.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:22:47.760 00:22:47.760 --- 10.0.0.2 ping statistics --- 00:22:47.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.760 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:22:47.760 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:47.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:47.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:22:47.760 00:22:47.760 --- 10.0.0.1 ping statistics --- 00:22:47.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.760 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:22:47.760 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:47.760 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:22:47.760 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:47.760 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:47.760 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:47.760 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:47.760 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:47.760 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:47.760 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:47.760 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1429455 00:22:47.760 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:47.760 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:47.760 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1429455 00:22:47.760 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1429455 
']' 00:22:47.760 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.760 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:47.760 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.760 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:47.760 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:48.019 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:48.019 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:22:48.019 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:48.019 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.019 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:48.019 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.019 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:22:48.019 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.019 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:48.019 Malloc0 00:22:48.019 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.019 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:48.019 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.019 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:48.019 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.019 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:48.019 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.019 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:48.019 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.019 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:48.019 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.019 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:48.019 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.019 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:22:48.019 23:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:23:20.086 Fuzzing completed. Shutting down the fuzz application 00:23:20.086 00:23:20.086 Dumping successful admin opcodes: 00:23:20.086 8, 9, 10, 24, 00:23:20.086 Dumping successful io opcodes: 00:23:20.086 0, 9, 00:23:20.086 NS: 0x200003aeff00 I/O qp, Total commands completed: 442197, total successful commands: 2575, random_seed: 821114240 00:23:20.086 NS: 0x200003aeff00 admin qp, Total commands completed: 55408, total successful commands: 443, random_seed: 1250008576 00:23:20.086 23:29:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:23:20.086 Fuzzing completed. Shutting down the fuzz application 00:23:20.086 00:23:20.086 Dumping successful admin opcodes: 00:23:20.086 24, 00:23:20.086 Dumping successful io opcodes: 00:23:20.086 00:23:20.086 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 4068293664 00:23:20.086 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 4068403029 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:20.086 rmmod nvme_tcp 00:23:20.086 rmmod nvme_fabrics 00:23:20.086 rmmod nvme_keyring 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1429455 ']' 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 
1429455 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1429455 ']' 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 1429455 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1429455 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1429455' 00:23:20.086 killing process with pid 1429455 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 1429455 00:23:20.086 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 1429455 00:23:20.345 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:20.345 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:20.345 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:20.345 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:20.345 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:20.345 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.345 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.345 23:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.271 23:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:22.271 23:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:23:22.271 00:23:22.271 real 0m36.914s 00:23:22.271 user 0m50.992s 00:23:22.271 sys 0m15.261s 00:23:22.271 23:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:22.271 23:29:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:22.271 ************************************ 00:23:22.271 END TEST nvmf_fuzz 00:23:22.271 ************************************ 00:23:22.271 23:29:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:22.271 23:29:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:22.271 23:29:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:22.271 23:29:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:22.533 ************************************ 00:23:22.533 START TEST 
nvmf_multiconnection 00:23:22.533 ************************************ 00:23:22.533 23:29:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:22.533 * Looking for test storage... 00:23:22.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...five more repeats of the same three tool dirs...]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same directory set with /opt/go promoted to the front; full value elided] 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same directory set with /opt/protoc promoted to the front; full value elided] 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[same exported value as @4; elided] 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.533 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:22.534 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:22.534 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:23:22.534 23:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:24.438 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:24.438 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:23:24.438 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:24.438 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:24.438 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:24.438 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:24.438 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:24.438 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:23:24.438 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:24.438 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:23:24.438 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:23:24.438 23:29:21 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:23:24.438 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:23:24.438 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:23:24.438 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:23:24.438 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:24.438 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:24.438 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:24.438 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:24.438 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:24.439 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- 
# [[ tcp == rdma ]] 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:24.439 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:24.439 23:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:24.439 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:24.439 Found net devices 
under 0000:0a:00.1: cvl_0_1 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:24.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:24.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:23:24.439 00:23:24.439 --- 10.0.0.2 ping statistics --- 00:23:24.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.439 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:24.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:24.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:23:24.439 00:23:24.439 --- 10.0.0.1 ping statistics --- 00:23:24.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.439 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.439 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:24.440 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:24.440 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.440 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:24.440 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:24.700 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:23:24.700 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:24.700 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:24.700 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:24.700 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1435062 00:23:24.700 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:24.700 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1435062 00:23:24.700 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 1435062 ']' 00:23:24.700 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.700 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:24.700 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
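The network bring-up xtraced above reduces to a short sequence: one ice port (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened, and both directions are ping-checked before the target app starts. A condensed sketch follows; the commands are copied from the trace, while the socket-wait loop at the end is an assumption standing in for the suite's waitforlisten helper, not its exact code:

  # Split the two ports across a netns so initiator and target use a real link.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root netns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target netns -> initiator

  # Start nvmf_tgt inside the namespace, then block until its RPC socket exists
  # (assumed stand-in for waitforlisten; a socket file does not strictly prove
  # the listener is ready, but it is close enough for a sketch).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do
    kill -0 "$nvmfpid" || exit 1                       # bail out if the app died
    sleep 0.5
  done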
00:23:24.700 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:24.700 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:24.700 [2024-07-25 23:29:22.220425] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:23:24.700 [2024-07-25 23:29:22.220517] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.700 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.700 [2024-07-25 23:29:22.258680] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:24.700 [2024-07-25 23:29:22.292055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:24.700 [2024-07-25 23:29:22.384173] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.700 [2024-07-25 23:29:22.384229] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:24.700 [2024-07-25 23:29:22.384255] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.700 [2024-07-25 23:29:22.384273] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.700 [2024-07-25 23:29:22.384285] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.700 [2024-07-25 23:29:22.384372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.700 [2024-07-25 23:29:22.384426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.700 [2024-07-25 23:29:22.384539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:24.700 [2024-07-25 23:29:22.384541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:24.959 [2024-07-25 23:29:22.538573] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.959 23:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:24.959 Malloc1 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:24.959 [2024-07-25 23:29:22.594135] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:24.959 Malloc2 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.959 23:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:24.959 Malloc3 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.959 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.218 Malloc4 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.218 Malloc5 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.218 
23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.218 Malloc6 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:25.218 23:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.218 Malloc7 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:23:25.218 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.219 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.219 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.219 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:23:25.219 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.219 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.219 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.219 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:25.219 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:23:25.219 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.219 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.219 Malloc8 00:23:25.219 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.219 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:23:25.219 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.219 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.219 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.219 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:23:25.219 23:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.219 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.219 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.219 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:23:25.219 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.219 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.219 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.219 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:25.219 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:23:25.219 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.219 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.477 Malloc9 00:23:25.477 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.477 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:23:25.477 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.477 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.477 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.477 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:23:25.477 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.477 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.477 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.477 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:23:25.477 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.477 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.477 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.477 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:25.477 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:23:25.477 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.477 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.477 Malloc10 00:23:25.477 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.477 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:23:25.478 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.478 23:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.478 Malloc11 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
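The eleven near-identical provisioning blocks above (Malloc1 through Malloc11, finishing below with the cnode11 listener) collapse to one loop. In this sketch rpc_cmd is assumed to forward to scripts/rpc.py on the default RPC socket; the transport flags and per-subsystem arguments are copied verbatim from the run:

  rpc_cmd() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }   # assumed wiring

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192      # done once, before the loop
  for i in $(seq 1 11); do                             # NVMF_SUBSYS=11
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"    # 64 MiB bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done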
00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:25.478 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:26.045 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:23:26.045 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:26.045 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:26.045 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:26.045 23:29:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:27.948 23:29:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:27.949 23:29:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:27.949 23:29:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:23:28.206 23:29:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:28.206 23:29:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:28.206 23:29:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:28.206 23:29:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:28.206 23:29:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:23:28.771 23:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:23:28.771 23:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:28.771 23:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:28.771 23:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:28.771 23:29:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:30.675 23:29:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:30.675 23:29:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:30.675 23:29:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:23:30.675 23:29:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:30.675 23:29:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:30.675 23:29:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:30.675 23:29:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:30.675 23:29:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:23:31.241 23:29:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:23:31.241 23:29:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:31.241 23:29:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:31.241 23:29:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:31.241 23:29:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:33.778 23:29:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:33.778 23:29:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:33.778 23:29:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:23:33.778 23:29:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:33.778 23:29:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:33.778 23:29:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:33.778 23:29:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:33.779 23:29:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:23:34.037 23:29:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:23:34.037 23:29:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:34.037 23:29:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:34.037 23:29:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:34.037 23:29:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:36.569 23:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:36.569 23:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:36.569 23:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:23:36.569 23:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:36.569 23:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:36.569 23:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:36.569 23:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:36.569 23:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:23:36.825 23:29:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:23:36.825 23:29:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:36.825 23:29:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:36.825 23:29:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:36.825 23:29:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:38.730 23:29:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:38.730 23:29:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:38.730 23:29:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:23:38.730 23:29:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:38.730 23:29:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:38.730 23:29:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:38.730 23:29:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:38.730 23:29:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:23:39.669 23:29:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:23:39.669 23:29:37 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:39.669 23:29:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:39.669 23:29:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:39.669 23:29:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:41.567 23:29:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:41.567 23:29:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:41.567 23:29:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:23:41.567 23:29:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:41.567 23:29:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:41.567 23:29:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:41.567 23:29:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:41.567 23:29:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:23:42.501 23:29:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:23:42.501 23:29:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:42.501 23:29:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:42.501 23:29:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:42.501 23:29:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:44.437 23:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:44.437 23:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:44.437 23:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:23:44.437 23:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:44.437 23:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:44.437 23:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:44.437 23:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:44.437 23:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 
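
The block above (and the iterations that continue below through cnode11) is the connect phase of multiconnection.sh: for each of the 11 subsystems the host issues nvme connect against the TCP listener at 10.0.0.2:4420, then the waitforserial helper from autotest_common.sh polls lsblk until a block device advertising the serial SPDK$i appears. A condensed sketch of that pattern, reconstructed from the xtrace output (the helper body is an approximation of the traced logic, not the verbatim SPDK source; HOSTNQN/HOSTID simply hold the UUID values visible in the log):

    NVMF_SUBSYS=11
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55

    waitforserial() {
        # Poll until a block device advertising the expected serial shows up.
        local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
        while (( i++ <= 15 )); do              # up to 16 probes, per the trace
            sleep 2                            # 2-second back-off, per the trace
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }

    for i in $(seq 1 $NVMF_SUBSYS); do
        nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
            -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
        waitforserial "SPDK$i"
    done

Each subsystem exposes one namespace whose serial number is SPDK$i, so counting lsblk rows that carry that serial is enough to confirm the connect completed before the loop moves on.
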
00:23:45.372 23:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:23:45.372 23:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:45.372 23:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:45.373 23:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:45.373 23:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:47.272 23:29:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:47.272 23:29:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:47.272 23:29:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:23:47.272 23:29:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:47.272 23:29:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:47.272 23:29:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:47.272 23:29:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:47.272 23:29:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:23:47.838 23:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:23:47.838 23:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:47.838 23:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:47.838 23:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:47.838 23:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:50.370 23:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:50.370 23:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:50.370 23:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:23:50.370 23:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:50.370 23:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:50.370 23:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:50.370 23:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:50.370 23:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:23:50.936 23:29:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:23:50.936 23:29:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:50.936 23:29:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:50.936 23:29:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:50.936 23:29:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:52.831 23:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:52.831 23:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:52.831 23:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:23:52.831 23:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:52.831 23:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:52.831 23:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:52.831 23:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:52.831 23:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:23:53.764 23:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:23:53.764 23:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:53.764 23:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:53.764 23:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:53.764 23:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:55.661 23:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:55.661 23:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:55.661 23:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:23:55.661 23:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:55.661 23:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:55.661 23:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:55.661 23:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10
00:23:55.661 [global]
00:23:55.661 thread=1
00:23:55.661 invalidate=1
00:23:55.661 rw=read
00:23:55.661 time_based=1
00:23:55.661 runtime=10
00:23:55.661 ioengine=libaio
00:23:55.661 direct=1
00:23:55.661 bs=262144
00:23:55.661 iodepth=64
00:23:55.661 norandommap=1
00:23:55.661 numjobs=1
00:23:55.661
00:23:55.661 [job0]
00:23:55.661 filename=/dev/nvme0n1
00:23:55.661 [job1]
00:23:55.661 filename=/dev/nvme10n1
00:23:55.661 [job2]
00:23:55.661 filename=/dev/nvme1n1
00:23:55.661 [job3]
00:23:55.661 filename=/dev/nvme2n1
00:23:55.661 [job4]
00:23:55.661 filename=/dev/nvme3n1
00:23:55.661 [job5]
00:23:55.661 filename=/dev/nvme4n1
00:23:55.661 [job6]
00:23:55.661 filename=/dev/nvme5n1
00:23:55.661 [job7]
00:23:55.661 filename=/dev/nvme6n1
00:23:55.661 [job8]
00:23:55.661 filename=/dev/nvme7n1
00:23:55.661 [job9]
00:23:55.661 filename=/dev/nvme8n1
00:23:55.661 [job10]
00:23:55.661 filename=/dev/nvme9n1
00:23:55.917 Could not set queue depth (nvme0n1)
00:23:55.917 Could not set queue depth (nvme10n1)
00:23:55.917 Could not set queue depth (nvme1n1)
00:23:55.917 Could not set queue depth (nvme2n1)
00:23:55.917 Could not set queue depth (nvme3n1)
00:23:55.917 Could not set queue depth (nvme4n1)
00:23:55.917 Could not set queue depth (nvme5n1)
00:23:55.917 Could not set queue depth (nvme6n1)
00:23:55.917 Could not set queue depth (nvme7n1)
00:23:55.917 Could not set queue depth (nvme8n1)
00:23:55.917 Could not set queue depth (nvme9n1)
00:23:55.917 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:55.917 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:55.917 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:55.917 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:55.917 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:55.917 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:55.917 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:55.917 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:55.917 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:55.917 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:55.917 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:23:55.917 fio-3.35
00:23:55.917
00:23:55.917 Starting 11 threads
00:24:08.122
00:24:08.122 job0: (groupid=0, jobs=1): err= 0: pid=1439310: Thu Jul 25 23:30:04 2024
00:24:08.122 read: IOPS=596, BW=149MiB/s (156MB/s)(1504MiB/10085msec)
00:24:08.122 slat (usec): min=9, max=104793, avg=1269.88, stdev=4709.08
00:24:08.122 clat (usec): min=1628, max=235962, avg=105900.42, stdev=53772.73
00:24:08.122 lat (usec): min=1646, max=260707, avg=107170.30, stdev=54621.48
00:24:08.122 clat percentiles (msec):
00:24:08.122 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 29], 20.00th=[ 51],
00:24:08.122 | 30.00th=[ 75], 40.00th=[ 95], 50.00th=[ 112], 60.00th=[ 127],
00:24:08.122 | 70.00th=[ 140], 80.00th=[ 155], 90.00th=[ 176], 95.00th=[ 188], 00:24:08.122 | 99.00th=[ 207], 99.50th=[ 215], 99.90th=[ 228], 99.95th=[ 236], 00:24:08.122 | 99.99th=[ 236] 00:24:08.122 bw ( KiB/s): min=91648, max=344576, per=8.27%, avg=152432.30, stdev=71111.79, samples=20 00:24:08.122 iops : min= 358, max= 1346, avg=595.40, stdev=277.81, samples=20 00:24:08.122 lat (msec) : 2=0.25%, 4=0.47%, 10=1.89%, 20=4.75%, 50=12.33% 00:24:08.122 lat (msec) : 100=22.84%, 250=57.47% 00:24:08.122 cpu : usr=0.34%, sys=1.56%, ctx=1154, majf=0, minf=4097 00:24:08.122 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:08.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.122 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:08.122 issued rwts: total=6017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:08.122 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:08.122 job1: (groupid=0, jobs=1): err= 0: pid=1439311: Thu Jul 25 23:30:04 2024 00:24:08.122 read: IOPS=655, BW=164MiB/s (172MB/s)(1655MiB/10095msec) 00:24:08.122 slat (usec): min=9, max=101947, avg=932.98, stdev=4156.19 00:24:08.122 clat (usec): min=1274, max=248739, avg=96559.08, stdev=47716.48 00:24:08.122 lat (usec): min=1319, max=283973, avg=97492.06, stdev=48186.16 00:24:08.122 clat percentiles (msec): 00:24:08.122 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 31], 20.00th=[ 61], 00:24:08.122 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 95], 60.00th=[ 106], 00:24:08.122 | 70.00th=[ 117], 80.00th=[ 136], 90.00th=[ 165], 95.00th=[ 182], 00:24:08.122 | 99.00th=[ 211], 99.50th=[ 220], 99.90th=[ 232], 99.95th=[ 239], 00:24:08.122 | 99.99th=[ 249] 00:24:08.122 bw ( KiB/s): min=91136, max=250368, per=9.11%, avg=167859.20, stdev=45018.40, samples=20 00:24:08.122 iops : min= 356, max= 978, avg=655.70, stdev=175.85, samples=20 00:24:08.122 lat (msec) : 2=0.23%, 4=1.16%, 10=2.17%, 20=4.71%, 50=6.39% 00:24:08.122 lat (msec) : 100=40.67%, 250=44.66% 00:24:08.122 cpu : usr=0.32%, sys=1.83%, ctx=1394, majf=0, minf=4097 00:24:08.122 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:24:08.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.122 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:08.122 issued rwts: total=6621,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:08.122 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:08.122 job2: (groupid=0, jobs=1): err= 0: pid=1439312: Thu Jul 25 23:30:04 2024 00:24:08.122 read: IOPS=877, BW=219MiB/s (230MB/s)(2206MiB/10054msec) 00:24:08.122 slat (usec): min=9, max=104095, avg=765.11, stdev=3419.91 00:24:08.122 clat (usec): min=1631, max=261878, avg=72099.49, stdev=42416.35 00:24:08.122 lat (usec): min=1658, max=301999, avg=72864.60, stdev=42845.93 00:24:08.122 clat percentiles (msec): 00:24:08.122 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 20], 20.00th=[ 35], 00:24:08.122 | 30.00th=[ 48], 40.00th=[ 58], 50.00th=[ 67], 60.00th=[ 79], 00:24:08.122 | 70.00th=[ 92], 80.00th=[ 104], 90.00th=[ 117], 95.00th=[ 155], 00:24:08.122 | 99.00th=[ 209], 99.50th=[ 222], 99.90th=[ 241], 99.95th=[ 253], 00:24:08.122 | 99.99th=[ 262] 00:24:08.122 bw ( KiB/s): min=114176, max=446976, per=12.18%, avg=224307.20, stdev=84093.43, samples=20 00:24:08.122 iops : min= 446, max= 1746, avg=876.20, stdev=328.49, samples=20 00:24:08.122 lat (msec) : 2=0.02%, 4=0.68%, 10=3.21%, 20=6.12%, 50=21.94% 00:24:08.122 lat (msec) : 100=45.00%, 250=22.96%, 
500=0.08% 00:24:08.122 cpu : usr=0.47%, sys=2.22%, ctx=1415, majf=0, minf=4097 00:24:08.122 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:08.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.122 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:08.122 issued rwts: total=8825,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:08.122 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:08.122 job3: (groupid=0, jobs=1): err= 0: pid=1439313: Thu Jul 25 23:30:04 2024 00:24:08.122 read: IOPS=495, BW=124MiB/s (130MB/s)(1248MiB/10069msec) 00:24:08.122 slat (usec): min=10, max=72246, avg=1740.42, stdev=5494.42 00:24:08.122 clat (msec): min=6, max=245, avg=127.23, stdev=44.01 00:24:08.122 lat (msec): min=7, max=253, avg=128.97, stdev=44.64 00:24:08.122 clat percentiles (msec): 00:24:08.122 | 1.00th=[ 20], 5.00th=[ 35], 10.00th=[ 77], 20.00th=[ 95], 00:24:08.122 | 30.00th=[ 106], 40.00th=[ 116], 50.00th=[ 129], 60.00th=[ 140], 00:24:08.122 | 70.00th=[ 155], 80.00th=[ 169], 90.00th=[ 184], 95.00th=[ 192], 00:24:08.122 | 99.00th=[ 215], 99.50th=[ 222], 99.90th=[ 232], 99.95th=[ 245], 00:24:08.122 | 99.99th=[ 247] 00:24:08.122 bw ( KiB/s): min=83456, max=281600, per=6.85%, avg=126156.80, stdev=47273.70, samples=20 00:24:08.122 iops : min= 326, max= 1100, avg=492.80, stdev=184.66, samples=20 00:24:08.122 lat (msec) : 10=0.26%, 20=0.82%, 50=6.23%, 100=18.09%, 250=74.60% 00:24:08.122 cpu : usr=0.30%, sys=1.68%, ctx=1049, majf=0, minf=4097 00:24:08.122 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:24:08.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:08.123 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:08.123 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:08.123 job4: (groupid=0, jobs=1): err= 0: pid=1439314: Thu Jul 25 23:30:04 2024 00:24:08.123 read: IOPS=732, BW=183MiB/s (192MB/s)(1844MiB/10064msec) 00:24:08.123 slat (usec): min=9, max=105758, avg=1101.44, stdev=4485.76 00:24:08.123 clat (usec): min=1189, max=237478, avg=86160.14, stdev=44367.71 00:24:08.123 lat (usec): min=1221, max=279878, avg=87261.59, stdev=44960.40 00:24:08.123 clat percentiles (msec): 00:24:08.123 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 31], 20.00th=[ 49], 00:24:08.123 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 84], 60.00th=[ 95], 00:24:08.123 | 70.00th=[ 109], 80.00th=[ 126], 90.00th=[ 150], 95.00th=[ 163], 00:24:08.123 | 99.00th=[ 184], 99.50th=[ 201], 99.90th=[ 232], 99.95th=[ 232], 00:24:08.123 | 99.99th=[ 239] 00:24:08.123 bw ( KiB/s): min=112128, max=337920, per=10.16%, avg=187187.20, stdev=63205.82, samples=20 00:24:08.123 iops : min= 438, max= 1320, avg=731.20, stdev=246.90, samples=20 00:24:08.123 lat (msec) : 2=0.16%, 4=2.01%, 10=2.47%, 20=2.07%, 50=13.93% 00:24:08.123 lat (msec) : 100=42.83%, 250=36.53% 00:24:08.123 cpu : usr=0.41%, sys=2.12%, ctx=1382, majf=0, minf=4097 00:24:08.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:08.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:08.123 issued rwts: total=7375,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:08.123 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:08.123 job5: (groupid=0, jobs=1): err= 0: pid=1439315: Thu Jul 25 
23:30:04 2024 00:24:08.123 read: IOPS=697, BW=174MiB/s (183MB/s)(1748MiB/10026msec) 00:24:08.123 slat (usec): min=8, max=119797, avg=888.53, stdev=4676.13 00:24:08.123 clat (usec): min=1474, max=313802, avg=90801.77, stdev=60199.83 00:24:08.123 lat (usec): min=1502, max=313860, avg=91690.30, stdev=60977.85 00:24:08.123 clat percentiles (msec): 00:24:08.123 | 1.00th=[ 6], 5.00th=[ 11], 10.00th=[ 17], 20.00th=[ 29], 00:24:08.123 | 30.00th=[ 42], 40.00th=[ 67], 50.00th=[ 87], 60.00th=[ 104], 00:24:08.123 | 70.00th=[ 126], 80.00th=[ 150], 90.00th=[ 178], 95.00th=[ 194], 00:24:08.123 | 99.00th=[ 230], 99.50th=[ 239], 99.90th=[ 245], 99.95th=[ 247], 00:24:08.123 | 99.99th=[ 313] 00:24:08.123 bw ( KiB/s): min=70144, max=395264, per=9.63%, avg=177426.50, stdev=73606.87, samples=20 00:24:08.123 iops : min= 274, max= 1544, avg=693.05, stdev=287.52, samples=20 00:24:08.123 lat (msec) : 2=0.03%, 4=0.26%, 10=4.02%, 20=7.51%, 50=22.28% 00:24:08.123 lat (msec) : 100=23.80%, 250=42.10%, 500=0.01% 00:24:08.123 cpu : usr=0.42%, sys=1.81%, ctx=1469, majf=0, minf=4097 00:24:08.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:08.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:08.123 issued rwts: total=6993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:08.123 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:08.123 job6: (groupid=0, jobs=1): err= 0: pid=1439316: Thu Jul 25 23:30:04 2024 00:24:08.123 read: IOPS=605, BW=151MiB/s (159MB/s)(1517MiB/10029msec) 00:24:08.123 slat (usec): min=8, max=116974, avg=1096.18, stdev=4865.68 00:24:08.123 clat (usec): min=1333, max=314459, avg=104604.97, stdev=54526.43 00:24:08.123 lat (usec): min=1354, max=314489, avg=105701.14, stdev=55214.81 00:24:08.123 clat percentiles (msec): 00:24:08.123 | 1.00th=[ 4], 5.00th=[ 21], 10.00th=[ 31], 20.00th=[ 54], 00:24:08.123 | 30.00th=[ 75], 40.00th=[ 87], 50.00th=[ 102], 60.00th=[ 121], 00:24:08.123 | 70.00th=[ 138], 80.00th=[ 157], 90.00th=[ 178], 95.00th=[ 192], 00:24:08.123 | 99.00th=[ 222], 99.50th=[ 241], 99.90th=[ 262], 99.95th=[ 262], 00:24:08.123 | 99.99th=[ 313] 00:24:08.123 bw ( KiB/s): min=96768, max=251904, per=8.35%, avg=153728.00, stdev=50149.29, samples=20 00:24:08.123 iops : min= 378, max= 984, avg=600.50, stdev=195.90, samples=20 00:24:08.123 lat (msec) : 2=0.16%, 4=1.55%, 10=2.13%, 20=1.12%, 50=13.74% 00:24:08.123 lat (msec) : 100=30.62%, 250=50.49%, 500=0.18% 00:24:08.123 cpu : usr=0.31%, sys=1.60%, ctx=1339, majf=0, minf=4097 00:24:08.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:08.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:08.123 issued rwts: total=6068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:08.123 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:08.123 job7: (groupid=0, jobs=1): err= 0: pid=1439317: Thu Jul 25 23:30:04 2024 00:24:08.123 read: IOPS=537, BW=134MiB/s (141MB/s)(1355MiB/10089msec) 00:24:08.123 slat (usec): min=9, max=59907, avg=1397.01, stdev=4790.07 00:24:08.123 clat (usec): min=1664, max=249419, avg=117621.38, stdev=47412.82 00:24:08.123 lat (usec): min=1680, max=249437, avg=119018.39, stdev=48171.29 00:24:08.123 clat percentiles (msec): 00:24:08.123 | 1.00th=[ 5], 5.00th=[ 26], 10.00th=[ 50], 20.00th=[ 79], 00:24:08.123 | 30.00th=[ 101], 40.00th=[ 111], 
50.00th=[ 122], 60.00th=[ 133], 00:24:08.123 | 70.00th=[ 144], 80.00th=[ 161], 90.00th=[ 178], 95.00th=[ 188], 00:24:08.123 | 99.00th=[ 213], 99.50th=[ 222], 99.90th=[ 241], 99.95th=[ 243], 00:24:08.123 | 99.99th=[ 249] 00:24:08.123 bw ( KiB/s): min=86528, max=266752, per=7.45%, avg=137164.80, stdev=51538.39, samples=20 00:24:08.123 iops : min= 338, max= 1042, avg=535.80, stdev=201.32, samples=20 00:24:08.123 lat (msec) : 2=0.07%, 4=0.66%, 10=1.29%, 20=1.53%, 50=6.90% 00:24:08.123 lat (msec) : 100=19.61%, 250=69.93% 00:24:08.123 cpu : usr=0.35%, sys=1.50%, ctx=1169, majf=0, minf=4097 00:24:08.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:08.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:08.123 issued rwts: total=5421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:08.123 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:08.123 job8: (groupid=0, jobs=1): err= 0: pid=1439318: Thu Jul 25 23:30:04 2024 00:24:08.123 read: IOPS=709, BW=177MiB/s (186MB/s)(1789MiB/10091msec) 00:24:08.123 slat (usec): min=9, max=166675, avg=1227.01, stdev=4610.82 00:24:08.123 clat (msec): min=2, max=266, avg=88.96, stdev=52.39 00:24:08.123 lat (msec): min=2, max=266, avg=90.18, stdev=53.03 00:24:08.123 clat percentiles (msec): 00:24:08.123 | 1.00th=[ 8], 5.00th=[ 28], 10.00th=[ 31], 20.00th=[ 33], 00:24:08.123 | 30.00th=[ 45], 40.00th=[ 65], 50.00th=[ 90], 60.00th=[ 105], 00:24:08.123 | 70.00th=[ 116], 80.00th=[ 133], 90.00th=[ 163], 95.00th=[ 182], 00:24:08.123 | 99.00th=[ 215], 99.50th=[ 245], 99.90th=[ 262], 99.95th=[ 262], 00:24:08.123 | 99.99th=[ 268] 00:24:08.123 bw ( KiB/s): min=82944, max=493568, per=9.86%, avg=181555.20, stdev=109518.93, samples=20 00:24:08.123 iops : min= 324, max= 1928, avg=709.20, stdev=427.81, samples=20 00:24:08.123 lat (msec) : 4=0.22%, 10=1.43%, 20=1.17%, 50=30.06%, 100=23.98% 00:24:08.123 lat (msec) : 250=42.64%, 500=0.49% 00:24:08.123 cpu : usr=0.33%, sys=2.39%, ctx=1348, majf=0, minf=3721 00:24:08.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:08.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:08.123 issued rwts: total=7155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:08.123 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:08.123 job9: (groupid=0, jobs=1): err= 0: pid=1439319: Thu Jul 25 23:30:04 2024 00:24:08.123 read: IOPS=818, BW=205MiB/s (214MB/s)(2058MiB/10061msec) 00:24:08.123 slat (usec): min=9, max=81986, avg=731.05, stdev=3816.39 00:24:08.123 clat (usec): min=873, max=277646, avg=77446.43, stdev=53403.04 00:24:08.123 lat (usec): min=895, max=277670, avg=78177.49, stdev=53930.56 00:24:08.123 clat percentiles (msec): 00:24:08.123 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 14], 20.00th=[ 24], 00:24:08.123 | 30.00th=[ 34], 40.00th=[ 59], 50.00th=[ 74], 60.00th=[ 89], 00:24:08.123 | 70.00th=[ 107], 80.00th=[ 127], 90.00th=[ 148], 95.00th=[ 174], 00:24:08.123 | 99.00th=[ 213], 99.50th=[ 230], 99.90th=[ 262], 99.95th=[ 275], 00:24:08.123 | 99.99th=[ 279] 00:24:08.123 bw ( KiB/s): min=111616, max=368128, per=11.35%, avg=209100.80, stdev=64858.69, samples=20 00:24:08.123 iops : min= 436, max= 1438, avg=816.80, stdev=253.35, samples=20 00:24:08.123 lat (usec) : 1000=0.01% 00:24:08.123 lat (msec) : 2=0.29%, 4=2.04%, 10=5.31%, 20=8.91%, 50=19.66% 
00:24:08.124 lat (msec) : 100=30.46%, 250=32.99%, 500=0.34%
00:24:08.124 cpu : usr=0.20%, sys=2.08%, ctx=1613, majf=0, minf=4097
00:24:08.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:24:08.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:08.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:24:08.124 issued rwts: total=8231,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:08.124 latency : target=0, window=0, percentile=100.00%, depth=64
00:24:08.124 job10: (groupid=0, jobs=1): err= 0: pid=1439320: Thu Jul 25 23:30:04 2024
00:24:08.124 read: IOPS=491, BW=123MiB/s (129MB/s)(1236MiB/10061msec)
00:24:08.124 slat (usec): min=9, max=92983, avg=1378.31, stdev=5092.64
00:24:08.124 clat (msec): min=2, max=260, avg=128.82, stdev=44.96
00:24:08.124 lat (msec): min=2, max=260, avg=130.20, stdev=45.41
00:24:08.124 clat percentiles (msec):
00:24:08.124 | 1.00th=[ 8], 5.00th=[ 54], 10.00th=[ 70], 20.00th=[ 96],
00:24:08.124 | 30.00th=[ 108], 40.00th=[ 117], 50.00th=[ 128], 60.00th=[ 142],
00:24:08.124 | 70.00th=[ 155], 80.00th=[ 169], 90.00th=[ 188], 95.00th=[ 199],
00:24:08.124 | 99.00th=[ 222], 99.50th=[ 234], 99.90th=[ 259], 99.95th=[ 262],
00:24:08.124 | 99.99th=[ 262]
00:24:08.124 bw ( KiB/s): min=89600, max=178176, per=6.78%, avg=124891.55, stdev=24983.62, samples=20
00:24:08.124 iops : min= 350, max= 696, avg=487.85, stdev=97.60, samples=20
00:24:08.124 lat (msec) : 4=0.08%, 10=1.23%, 20=0.97%, 50=2.23%, 100=18.98%
00:24:08.124 lat (msec) : 250=76.16%, 500=0.34%
00:24:08.124 cpu : usr=0.24%, sys=1.58%, ctx=1155, majf=0, minf=4097
00:24:08.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7%
00:24:08.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:08.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:24:08.124 issued rwts: total=4942,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:08.124 latency : target=0, window=0, percentile=100.00%, depth=64
00:24:08.124
00:24:08.124 Run status group 0 (all jobs):
00:24:08.124 READ: bw=1799MiB/s (1886MB/s), 123MiB/s-219MiB/s (129MB/s-230MB/s), io=17.7GiB (19.0GB), run=10026-10095msec
00:24:08.124
00:24:08.124 Disk stats (read/write):
00:24:08.124 nvme0n1: ios=11851/0, merge=0/0, ticks=1238737/0, in_queue=1238737, util=97.25%
00:24:08.124 nvme10n1: ios=13050/0, merge=0/0, ticks=1245059/0, in_queue=1245059, util=97.47%
00:24:08.124 nvme1n1: ios=17432/0, merge=0/0, ticks=1242625/0, in_queue=1242625, util=97.72%
00:24:08.124 nvme2n1: ios=9719/0, merge=0/0, ticks=1235327/0, in_queue=1235327, util=97.87%
00:24:08.124 nvme3n1: ios=14564/0, merge=0/0, ticks=1243127/0, in_queue=1243127, util=97.95%
00:24:08.124 nvme4n1: ios=13747/0, merge=0/0, ticks=1243498/0, in_queue=1243498, util=98.26%
00:24:08.124 nvme5n1: ios=11930/0, merge=0/0, ticks=1243836/0, in_queue=1243836, util=98.43%
00:24:08.124 nvme6n1: ios=10658/0, merge=0/0, ticks=1241409/0, in_queue=1241409, util=98.53%
00:24:08.124 nvme7n1: ios=14096/0, merge=0/0, ticks=1238310/0, in_queue=1238310, util=98.93%
00:24:08.124 nvme8n1: ios=16258/0, merge=0/0, ticks=1244094/0, in_queue=1244094, util=99.08%
00:24:08.124 nvme9n1: ios=9693/0, merge=0/0, ticks=1240909/0, in_queue=1240909, util=99.21%
23:30:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10
00:24:08.124 [global]
00:24:08.124 thread=1
00:24:08.124 invalidate=1
00:24:08.124 rw=randwrite
00:24:08.124 time_based=1
00:24:08.124 runtime=10
00:24:08.124 ioengine=libaio
00:24:08.124 direct=1
00:24:08.124 bs=262144
00:24:08.124 iodepth=64
00:24:08.124 norandommap=1
00:24:08.124 numjobs=1
00:24:08.124
00:24:08.124 [job0]
00:24:08.124 filename=/dev/nvme0n1
00:24:08.124 [job1]
00:24:08.124 filename=/dev/nvme10n1
00:24:08.124 [job2]
00:24:08.124 filename=/dev/nvme1n1
00:24:08.124 [job3]
00:24:08.124 filename=/dev/nvme2n1
00:24:08.124 [job4]
00:24:08.124 filename=/dev/nvme3n1
00:24:08.124 [job5]
00:24:08.124 filename=/dev/nvme4n1
00:24:08.124 [job6]
00:24:08.124 filename=/dev/nvme5n1
00:24:08.124 [job7]
00:24:08.124 filename=/dev/nvme6n1
00:24:08.124 [job8]
00:24:08.124 filename=/dev/nvme7n1
00:24:08.124 [job9]
00:24:08.124 filename=/dev/nvme8n1
00:24:08.124 [job10]
00:24:08.124 filename=/dev/nvme9n1
00:24:08.124 Could not set queue depth (nvme0n1)
00:24:08.124 Could not set queue depth (nvme10n1)
00:24:08.124 Could not set queue depth (nvme1n1)
00:24:08.124 Could not set queue depth (nvme2n1)
00:24:08.124 Could not set queue depth (nvme3n1)
00:24:08.124 Could not set queue depth (nvme4n1)
00:24:08.124 Could not set queue depth (nvme5n1)
00:24:08.124 Could not set queue depth (nvme6n1)
00:24:08.124 Could not set queue depth (nvme7n1)
00:24:08.124 Could not set queue depth (nvme8n1)
00:24:08.124 Could not set queue depth (nvme9n1)
00:24:08.124 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:08.124 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:08.124 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:08.124 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:08.124 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:08.124 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:08.124 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:08.124 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:08.124 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:08.124 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:08.124 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:08.124 fio-3.35
00:24:08.124 Starting 11 threads
00:24:18.102
00:24:18.102 job0: (groupid=0, jobs=1): err= 0: pid=1440595: Thu Jul 25 23:30:14 2024
00:24:18.102 write: IOPS=457, BW=114MiB/s (120MB/s)(1158MiB/10131msec); 0 zone resets
00:24:18.102 slat (usec): min=24, max=105302, avg=1724.97, stdev=4396.37
00:24:18.102 clat (msec): min=4, max=307, avg=138.13, stdev=58.65
00:24:18.102 lat (msec): min=4, max=307, avg=139.85, stdev=59.43
00:24:18.102 clat percentiles (msec):
00:24:18.102 | 1.00th=[ 18], 5.00th=[ 48], 10.00th=[ 71], 20.00th=[ 96],
00:24:18.102 | 30.00th=[ 109], 40.00th=[ 116], 50.00th=[ 128], 60.00th=[ 142],
00:24:18.102 | 70.00th=[ 167], 80.00th=[ 182], 90.00th=[ 224], 95.00th=[ 251],
| 99.00th=[ 288], 99.50th=[ 296], 99.90th=[ 305], 99.95th=[ 309], 00:24:18.102 | 99.99th=[ 309] 00:24:18.102 bw ( KiB/s): min=51200, max=203776, per=8.07%, avg=116959.45, stdev=36792.30, samples=20 00:24:18.102 iops : min= 200, max= 796, avg=456.85, stdev=143.75, samples=20 00:24:18.102 lat (msec) : 10=0.37%, 20=0.91%, 50=4.04%, 100=16.52%, 250=73.16% 00:24:18.102 lat (msec) : 500=5.01% 00:24:18.102 cpu : usr=1.43%, sys=1.61%, ctx=2158, majf=0, minf=1 00:24:18.102 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:24:18.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:18.102 issued rwts: total=0,4632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.102 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:18.102 job1: (groupid=0, jobs=1): err= 0: pid=1440607: Thu Jul 25 23:30:14 2024 00:24:18.102 write: IOPS=557, BW=139MiB/s (146MB/s)(1412MiB/10131msec); 0 zone resets 00:24:18.102 slat (usec): min=17, max=150757, avg=1394.14, stdev=4340.42 00:24:18.102 clat (usec): min=1083, max=375669, avg=113284.56, stdev=69772.17 00:24:18.102 lat (usec): min=1118, max=375740, avg=114678.70, stdev=70521.22 00:24:18.102 clat percentiles (msec): 00:24:18.102 | 1.00th=[ 6], 5.00th=[ 27], 10.00th=[ 39], 20.00th=[ 42], 00:24:18.102 | 30.00th=[ 64], 40.00th=[ 93], 50.00th=[ 110], 60.00th=[ 122], 00:24:18.102 | 70.00th=[ 140], 80.00th=[ 169], 90.00th=[ 209], 95.00th=[ 243], 00:24:18.102 | 99.00th=[ 342], 99.50th=[ 351], 99.90th=[ 376], 99.95th=[ 376], 00:24:18.102 | 99.99th=[ 376] 00:24:18.102 bw ( KiB/s): min=71168, max=358912, per=9.87%, avg=142986.25, stdev=66628.75, samples=20 00:24:18.102 iops : min= 278, max= 1402, avg=558.50, stdev=260.26, samples=20 00:24:18.102 lat (msec) : 2=0.11%, 4=0.46%, 10=1.20%, 20=1.96%, 50=20.06% 00:24:18.102 lat (msec) : 100=19.37%, 250=52.58%, 500=4.27% 00:24:18.102 cpu : usr=1.80%, sys=1.68%, ctx=2593, majf=0, minf=1 00:24:18.102 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:18.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:18.103 issued rwts: total=0,5649,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.103 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:18.103 job2: (groupid=0, jobs=1): err= 0: pid=1440614: Thu Jul 25 23:30:14 2024 00:24:18.103 write: IOPS=470, BW=118MiB/s (123MB/s)(1191MiB/10131msec); 0 zone resets 00:24:18.103 slat (usec): min=16, max=94392, avg=1554.50, stdev=4235.71 00:24:18.103 clat (msec): min=3, max=401, avg=134.54, stdev=70.54 00:24:18.103 lat (msec): min=3, max=404, avg=136.09, stdev=71.25 00:24:18.103 clat percentiles (msec): 00:24:18.103 | 1.00th=[ 9], 5.00th=[ 22], 10.00th=[ 40], 20.00th=[ 80], 00:24:18.103 | 30.00th=[ 90], 40.00th=[ 120], 50.00th=[ 132], 60.00th=[ 150], 00:24:18.103 | 70.00th=[ 171], 80.00th=[ 186], 90.00th=[ 226], 95.00th=[ 259], 00:24:18.103 | 99.00th=[ 326], 99.50th=[ 347], 99.90th=[ 393], 99.95th=[ 397], 00:24:18.103 | 99.99th=[ 401] 00:24:18.103 bw ( KiB/s): min=69120, max=172544, per=8.30%, avg=120280.70, stdev=31703.84, samples=20 00:24:18.103 iops : min= 270, max= 674, avg=469.80, stdev=123.85, samples=20 00:24:18.103 lat (msec) : 4=0.10%, 10=1.30%, 20=3.38%, 50=8.19%, 100=22.18% 00:24:18.103 lat (msec) : 250=58.59%, 500=6.26% 00:24:18.103 cpu : usr=1.48%, sys=1.58%, ctx=2474, majf=0, minf=1 00:24:18.103 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:18.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:18.103 issued rwts: total=0,4762,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.103 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:18.103 job3: (groupid=0, jobs=1): err= 0: pid=1440615: Thu Jul 25 23:30:14 2024 00:24:18.103 write: IOPS=565, BW=141MiB/s (148MB/s)(1429MiB/10107msec); 0 zone resets 00:24:18.103 slat (usec): min=18, max=182478, avg=1318.48, stdev=4599.97 00:24:18.103 clat (msec): min=2, max=325, avg=111.79, stdev=70.22 00:24:18.103 lat (msec): min=2, max=325, avg=113.11, stdev=71.09 00:24:18.103 clat percentiles (msec): 00:24:18.103 | 1.00th=[ 11], 5.00th=[ 25], 10.00th=[ 39], 20.00th=[ 46], 00:24:18.103 | 30.00th=[ 54], 40.00th=[ 67], 50.00th=[ 97], 60.00th=[ 126], 00:24:18.103 | 70.00th=[ 159], 80.00th=[ 180], 90.00th=[ 209], 95.00th=[ 236], 00:24:18.103 | 99.00th=[ 284], 99.50th=[ 305], 99.90th=[ 326], 99.95th=[ 326], 00:24:18.103 | 99.99th=[ 326] 00:24:18.103 bw ( KiB/s): min=57344, max=240670, per=9.99%, avg=144641.50, stdev=57317.16, samples=20 00:24:18.103 iops : min= 224, max= 940, avg=565.00, stdev=223.88, samples=20 00:24:18.103 lat (msec) : 4=0.14%, 10=0.67%, 20=2.59%, 50=23.68%, 100=23.94% 00:24:18.103 lat (msec) : 250=45.89%, 500=3.10% 00:24:18.103 cpu : usr=1.76%, sys=2.06%, ctx=2765, majf=0, minf=1 00:24:18.103 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:18.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:18.103 issued rwts: total=0,5714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.103 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:18.103 job4: (groupid=0, jobs=1): err= 0: pid=1440616: Thu Jul 25 23:30:14 2024 00:24:18.103 write: IOPS=528, BW=132MiB/s (139MB/s)(1338MiB/10126msec); 0 zone resets 00:24:18.103 slat (usec): min=22, max=54207, avg=1095.54, stdev=3411.81 00:24:18.103 clat (usec): min=1138, max=290370, avg=119914.16, stdev=64642.77 00:24:18.103 lat (usec): min=1216, max=290634, avg=121009.69, stdev=65384.32 00:24:18.103 clat percentiles (msec): 00:24:18.103 | 1.00th=[ 3], 5.00th=[ 21], 10.00th=[ 33], 20.00th=[ 53], 00:24:18.103 | 30.00th=[ 78], 40.00th=[ 103], 50.00th=[ 126], 60.00th=[ 138], 00:24:18.103 | 70.00th=[ 157], 80.00th=[ 180], 90.00th=[ 205], 95.00th=[ 224], 00:24:18.103 | 99.00th=[ 271], 99.50th=[ 284], 99.90th=[ 292], 99.95th=[ 292], 00:24:18.103 | 99.99th=[ 292] 00:24:18.103 bw ( KiB/s): min=73728, max=247808, per=9.34%, avg=135361.10, stdev=37889.87, samples=20 00:24:18.103 iops : min= 288, max= 968, avg=528.75, stdev=148.01, samples=20 00:24:18.103 lat (msec) : 2=0.36%, 4=0.99%, 10=0.95%, 20=2.58%, 50=14.33% 00:24:18.103 lat (msec) : 100=20.00%, 250=58.34%, 500=2.45% 00:24:18.103 cpu : usr=1.83%, sys=1.73%, ctx=3516, majf=0, minf=1 00:24:18.103 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:18.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:18.103 issued rwts: total=0,5351,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.103 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:18.103 job5: (groupid=0, jobs=1): err= 0: pid=1440617: Thu Jul 25 23:30:14 2024 
00:24:18.103 write: IOPS=465, BW=116MiB/s (122MB/s)(1184MiB/10159msec); 0 zone resets 00:24:18.103 slat (usec): min=15, max=69077, avg=1674.33, stdev=4418.36 00:24:18.103 clat (usec): min=1591, max=371911, avg=135604.77, stdev=73123.30 00:24:18.103 lat (usec): min=1689, max=371976, avg=137279.11, stdev=74030.96 00:24:18.103 clat percentiles (msec): 00:24:18.103 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 32], 20.00th=[ 59], 00:24:18.103 | 30.00th=[ 95], 40.00th=[ 117], 50.00th=[ 136], 60.00th=[ 155], 00:24:18.103 | 70.00th=[ 182], 80.00th=[ 201], 90.00th=[ 234], 95.00th=[ 255], 00:24:18.103 | 99.00th=[ 292], 99.50th=[ 317], 99.90th=[ 351], 99.95th=[ 351], 00:24:18.103 | 99.99th=[ 372] 00:24:18.103 bw ( KiB/s): min=65536, max=204288, per=8.25%, avg=119540.15, stdev=39806.50, samples=20 00:24:18.103 iops : min= 256, max= 798, avg=466.95, stdev=155.49, samples=20 00:24:18.103 lat (msec) : 2=0.08%, 4=0.42%, 10=1.71%, 20=4.22%, 50=9.00% 00:24:18.103 lat (msec) : 100=17.00%, 250=61.60%, 500=5.96% 00:24:18.103 cpu : usr=1.41%, sys=1.57%, ctx=2402, majf=0, minf=1 00:24:18.103 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:18.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:18.103 issued rwts: total=0,4734,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.103 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:18.103 job6: (groupid=0, jobs=1): err= 0: pid=1440618: Thu Jul 25 23:30:14 2024 00:24:18.103 write: IOPS=447, BW=112MiB/s (117MB/s)(1136MiB/10157msec); 0 zone resets 00:24:18.103 slat (usec): min=14, max=45541, avg=1668.04, stdev=4243.35 00:24:18.103 clat (usec): min=1777, max=356565, avg=141356.26, stdev=71380.50 00:24:18.103 lat (usec): min=1816, max=356626, avg=143024.30, stdev=72425.51 00:24:18.103 clat percentiles (msec): 00:24:18.103 | 1.00th=[ 7], 5.00th=[ 22], 10.00th=[ 41], 20.00th=[ 68], 00:24:18.103 | 30.00th=[ 99], 40.00th=[ 128], 50.00th=[ 144], 60.00th=[ 171], 00:24:18.103 | 70.00th=[ 186], 80.00th=[ 205], 90.00th=[ 232], 95.00th=[ 253], 00:24:18.103 | 99.00th=[ 288], 99.50th=[ 296], 99.90th=[ 342], 99.95th=[ 342], 00:24:18.103 | 99.99th=[ 359] 00:24:18.103 bw ( KiB/s): min=63488, max=202752, per=7.92%, avg=114679.65, stdev=41425.55, samples=20 00:24:18.103 iops : min= 248, max= 792, avg=447.95, stdev=161.83, samples=20 00:24:18.103 lat (msec) : 2=0.09%, 4=0.29%, 10=1.32%, 20=2.66%, 50=8.52% 00:24:18.103 lat (msec) : 100=17.98%, 250=63.55%, 500=5.59% 00:24:18.103 cpu : usr=1.23%, sys=1.66%, ctx=2461, majf=0, minf=1 00:24:18.103 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:18.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:18.103 issued rwts: total=0,4543,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.103 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:18.103 job7: (groupid=0, jobs=1): err= 0: pid=1440619: Thu Jul 25 23:30:14 2024 00:24:18.103 write: IOPS=558, BW=140MiB/s (146MB/s)(1414MiB/10131msec); 0 zone resets 00:24:18.103 slat (usec): min=16, max=93181, avg=912.13, stdev=3508.97 00:24:18.103 clat (usec): min=1345, max=320275, avg=113672.64, stdev=73614.10 00:24:18.103 lat (usec): min=1383, max=327586, avg=114584.77, stdev=74347.75 00:24:18.103 clat percentiles (msec): 00:24:18.103 | 1.00th=[ 5], 5.00th=[ 15], 10.00th=[ 23], 20.00th=[ 46], 00:24:18.103 | 
30.00th=[ 67], 40.00th=[ 83], 50.00th=[ 100], 60.00th=[ 117], 00:24:18.103 | 70.00th=[ 153], 80.00th=[ 184], 90.00th=[ 226], 95.00th=[ 249], 00:24:18.103 | 99.00th=[ 288], 99.50th=[ 292], 99.90th=[ 309], 99.95th=[ 313], 00:24:18.103 | 99.99th=[ 321] 00:24:18.103 bw ( KiB/s): min=67449, max=294400, per=9.88%, avg=143174.05, stdev=55227.90, samples=20 00:24:18.103 iops : min= 263, max= 1150, avg=559.25, stdev=215.77, samples=20 00:24:18.103 lat (msec) : 2=0.09%, 4=0.87%, 10=1.73%, 20=6.17%, 50=13.28% 00:24:18.103 lat (msec) : 100=28.48%, 250=44.77%, 500=4.61% 00:24:18.103 cpu : usr=1.66%, sys=2.08%, ctx=3984, majf=0, minf=1 00:24:18.104 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:18.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:18.104 issued rwts: total=0,5656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.104 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:18.104 job8: (groupid=0, jobs=1): err= 0: pid=1440624: Thu Jul 25 23:30:14 2024 00:24:18.104 write: IOPS=500, BW=125MiB/s (131MB/s)(1272MiB/10158msec); 0 zone resets 00:24:18.104 slat (usec): min=15, max=139084, avg=1193.65, stdev=4755.49 00:24:18.104 clat (msec): min=2, max=418, avg=126.52, stdev=78.45 00:24:18.104 lat (msec): min=2, max=418, avg=127.72, stdev=79.38 00:24:18.104 clat percentiles (msec): 00:24:18.104 | 1.00th=[ 10], 5.00th=[ 20], 10.00th=[ 31], 20.00th=[ 51], 00:24:18.104 | 30.00th=[ 75], 40.00th=[ 89], 50.00th=[ 110], 60.00th=[ 142], 00:24:18.104 | 70.00th=[ 174], 80.00th=[ 201], 90.00th=[ 241], 95.00th=[ 266], 00:24:18.104 | 99.00th=[ 309], 99.50th=[ 326], 99.90th=[ 359], 99.95th=[ 418], 00:24:18.104 | 99.99th=[ 418] 00:24:18.104 bw ( KiB/s): min=56320, max=216576, per=8.88%, avg=128630.45, stdev=47028.75, samples=20 00:24:18.104 iops : min= 220, max= 846, avg=502.45, stdev=183.72, samples=20 00:24:18.104 lat (msec) : 4=0.06%, 10=1.16%, 20=4.21%, 50=14.41%, 100=24.82% 00:24:18.104 lat (msec) : 250=48.21%, 500=7.13% 00:24:18.104 cpu : usr=1.59%, sys=1.73%, ctx=3380, majf=0, minf=1 00:24:18.104 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:18.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:18.104 issued rwts: total=0,5088,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.104 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:18.104 job9: (groupid=0, jobs=1): err= 0: pid=1440625: Thu Jul 25 23:30:14 2024 00:24:18.104 write: IOPS=531, BW=133MiB/s (139MB/s)(1347MiB/10132msec); 0 zone resets 00:24:18.104 slat (usec): min=19, max=101406, avg=1545.92, stdev=3989.10 00:24:18.104 clat (msec): min=2, max=353, avg=118.77, stdev=61.18 00:24:18.104 lat (msec): min=2, max=353, avg=120.32, stdev=61.86 00:24:18.104 clat percentiles (msec): 00:24:18.104 | 1.00th=[ 7], 5.00th=[ 31], 10.00th=[ 55], 20.00th=[ 79], 00:24:18.104 | 30.00th=[ 84], 40.00th=[ 90], 50.00th=[ 101], 60.00th=[ 117], 00:24:18.104 | 70.00th=[ 138], 80.00th=[ 174], 90.00th=[ 209], 95.00th=[ 234], 00:24:18.104 | 99.00th=[ 279], 99.50th=[ 309], 99.90th=[ 347], 99.95th=[ 347], 00:24:18.104 | 99.99th=[ 355] 00:24:18.104 bw ( KiB/s): min=59392, max=205312, per=9.41%, avg=136261.60, stdev=45879.79, samples=20 00:24:18.104 iops : min= 232, max= 802, avg=532.25, stdev=179.25, samples=20 00:24:18.104 lat (msec) : 4=0.20%, 10=1.28%, 20=1.17%, 50=6.29%, 
100=40.94%
00:24:18.104 lat (msec) : 250=46.83%, 500=3.29%
00:24:18.104 cpu : usr=1.76%, sys=1.64%, ctx=2292, majf=0, minf=1
00:24:18.104 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8%
00:24:18.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:18.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:24:18.104 issued rwts: total=0,5386,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:18.104 latency : target=0, window=0, percentile=100.00%, depth=64
00:24:18.104 job10: (groupid=0, jobs=1): err= 0: pid=1440626: Thu Jul 25 23:30:14 2024
00:24:18.104 write: IOPS=589, BW=147MiB/s (154MB/s)(1492MiB/10132msec); 0 zone resets
00:24:18.104 slat (usec): min=18, max=52747, avg=1228.81, stdev=3218.20
00:24:18.104 clat (usec): min=935, max=317878, avg=107229.21, stdev=63342.78
00:24:18.104 lat (usec): min=980, max=317909, avg=108458.03, stdev=64026.62
00:24:18.104 clat percentiles (msec):
00:24:18.104 | 1.00th=[ 4], 5.00th=[ 16], 10.00th=[ 39], 20.00th=[ 44],
00:24:18.104 | 30.00th=[ 59], 40.00th=[ 83], 50.00th=[ 104], 60.00th=[ 126],
00:24:18.104 | 70.00th=[ 140], 80.00th=[ 165], 90.00th=[ 190], 95.00th=[ 218],
00:24:18.104 | 99.00th=[ 288], 99.50th=[ 305], 99.90th=[ 317], 99.95th=[ 317],
00:24:18.104 | 99.99th=[ 317]
00:24:18.104 bw ( KiB/s): min=61440, max=297472, per=10.43%, avg=151154.60, stdev=58668.00, samples=20
00:24:18.104 iops : min= 240, max= 1162, avg=590.40, stdev=229.19, samples=20
00:24:18.104 lat (usec) : 1000=0.05%
00:24:18.104 lat (msec) : 2=0.25%, 4=0.70%, 10=2.06%, 20=2.75%, 50=18.77%
00:24:18.104 lat (msec) : 100=24.28%, 250=49.28%, 500=1.86%
00:24:18.104 cpu : usr=1.87%, sys=1.91%, ctx=2973, majf=0, minf=1
00:24:18.104 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9%
00:24:18.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:18.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:24:18.104 issued rwts: total=0,5968,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:18.104 latency : target=0, window=0, percentile=100.00%, depth=64
00:24:18.104
00:24:18.104 Run status group 0 (all jobs):
00:24:18.104 WRITE: bw=1415MiB/s (1483MB/s), 112MiB/s-147MiB/s (117MB/s-154MB/s), io=14.0GiB (15.1GB), run=10107-10159msec
00:24:18.104
00:24:18.104 Disk stats (read/write):
00:24:18.104 nvme0n1: ios=41/9066, merge=0/0, ticks=758/1213142, in_queue=1213900, util=99.95%
00:24:18.104 nvme10n1: ios=51/11100, merge=0/0, ticks=3236/1180375, in_queue=1183611, util=99.94%
00:24:18.104 nvme1n1: ios=0/9321, merge=0/0, ticks=0/1216748, in_queue=1216748, util=97.49%
00:24:18.104 nvme2n1: ios=40/11226, merge=0/0, ticks=2988/1179190, in_queue=1182178, util=100.00%
00:24:18.104 nvme3n1: ios=48/10437, merge=0/0, ticks=488/1222670, in_queue=1223158, util=99.95%
00:24:18.104 nvme4n1: ios=0/9290, merge=0/0, ticks=0/1209630, in_queue=1209630, util=98.08%
00:24:18.104 nvme5n1: ios=0/8913, merge=0/0, ticks=0/1212796, in_queue=1212796, util=98.24%
00:24:18.104 nvme6n1: ios=0/11129, merge=0/0, ticks=0/1225845, in_queue=1225845, util=98.35%
00:24:18.104 nvme7n1: ios=0/10001, merge=0/0, ticks=0/1219549, in_queue=1219549, util=98.79%
00:24:18.104 nvme8n1: ios=41/10586, merge=0/0, ticks=1299/1210674, in_queue=1211973, util=100.00%
00:24:18.104 nvme9n1: ios=50/11734, merge=0/0, ticks=902/1216483, in_queue=1217385, util=99.99%
23:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync
00:24:18.104 23:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:24:18.104 23:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.104 23:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:18.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:18.104 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:24:18.104 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:18.104 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:18.104 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:24:18.104 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:18.104 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:24:18.104 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:18.104 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:18.104 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.104 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.104 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.104 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.104 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:24:18.104 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:24:18.104 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:24:18.104 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:18.104 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:18.104 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:24:18.104 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:18.104 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:24:18.104 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:18.104 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:18.104 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.104 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.104 23:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.105 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.105 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:18.363 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:24:18.363 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:24:18.363 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:18.364 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:18.364 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:24:18.364 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:18.364 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:24:18.364 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:18.364 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:18.364 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.364 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.364 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.364 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.364 23:30:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:18.364 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:18.364 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:24:18.364 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:18.364 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:18.364 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:24:18.364 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:18.364 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:24:18.364 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:18.364 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:18.364 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.364 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.364 23:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.364 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.364 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:24:18.624 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:24:18.624 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:24:18.624 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:18.624 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:18.624 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:24:18.624 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:18.624 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:24:18.624 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:18.624 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:18.624 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.624 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.624 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.624 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.624 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:24:18.884 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:24:18.884 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:24:18.884 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:18.884 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:18.885 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:24:18.885 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:18.885 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:24:18.885 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:18.885 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:24:18.885 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.885 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.885 23:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.885 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.885 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:24:19.144 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:24:19.144 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:24:19.144 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:19.144 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:19.144 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:24:19.144 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:19.144 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:24:19.144 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:19.144 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:24:19.144 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.144 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.144 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.144 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:19.145 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:24:19.145 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:24:19.145 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:24:19.145 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:19.145 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:19.145 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:24:19.145 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:19.145 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:24:19.145 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:19.145 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:24:19.145 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.145 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.145 23:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.145 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:19.145 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:24:19.403 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:24:19.403 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:24:19.403 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:19.403 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:19.403 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:24:19.404 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:19.404 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:24:19.404 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:19.404 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:24:19.404 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.404 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.404 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.404 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:19.404 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:24:19.404 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:24:19.404 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:24:19.404 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:19.404 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:19.404 23:30:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.404 
23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:24:19.404 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:19.404 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:19.404 rmmod nvme_tcp 00:24:19.404 rmmod nvme_fabrics 00:24:19.663 rmmod nvme_keyring 00:24:19.663 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:19.663 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:24:19.663 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:24:19.663 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1435062 ']' 00:24:19.663 23:30:17 
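Each iteration above blocks in waitforserial_disconnect until the kernel has removed the block device. Only the lsblk/grep probes appear in the trace; the retry bound and sleep interval in this sketch are assumptions:

    # Bounded poll until a serial stops showing up in lsblk output.
    waitforserial_disconnect() {
        local serial=$1 i=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( ++i > 15 )) && return 1   # give up after ~15 probes (assumed bound)
            sleep 1
        done
        return 0
    }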
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1435062 00:24:19.663 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 1435062 ']' 00:24:19.663 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 1435062 00:24:19.663 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:24:19.663 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:19.663 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1435062 00:24:19.663 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:19.663 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:19.663 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1435062' 00:24:19.663 killing process with pid 1435062 00:24:19.663 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 1435062 00:24:19.663 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 1435062 00:24:20.234 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:20.234 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:20.234 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:20.234 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:20.234 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:20.234 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.234 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.234 23:30:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.139 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:22.139 00:24:22.139 real 0m59.779s 00:24:22.139 user 3m18.075s 00:24:22.139 sys 0m24.660s 00:24:22.139 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:22.139 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.139 ************************************ 00:24:22.139 END TEST nvmf_multiconnection 00:24:22.139 ************************************ 00:24:22.139 23:30:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:22.139 23:30:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:22.139 23:30:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:22.139 23:30:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
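The killprocess helper traced above checks the process still exists, refuses to signal a sudo wrapper, then kills the target and reaps it so the nonzero exit of a killed app cannot fail the suite. A sketch with the shape inferred from the trace:

    # Stop a daemonized test process by pid (assumed reconstruction).
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0                        # already gone
        [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1  # never TERM the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                                           # reap; ignore nonzero exit
    }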
00:24:22.139 ************************************ 00:24:22.139 START TEST nvmf_initiator_timeout 00:24:22.139 ************************************ 00:24:22.139 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:22.399 * Looking for test storage... 00:24:22.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:22.399 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:22.399 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:24:22.399 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:22.399 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:22.399 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:22.399 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:22.399 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:22.399 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:22.399 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:22.399 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:22.399 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:22.399 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:22.399 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:22.399 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:22.399 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:22.399 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:22.399 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:22.399 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:22.399 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:22.399 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.399 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.399 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:22.400 23:30:19 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:24:22.400 23:30:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:24.304 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:24.304 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:24:24.304 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:24.304 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:24.304 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:24.304 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:24.304 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:24.304 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:24:24.304 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:24.304 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:24:24.304 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:24:24.304 23:30:21 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:24:24.304 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:24:24.304 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:24:24.304 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:24:24.304 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:24.304 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:24.304 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:24.304 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:24.304 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:24.304 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:24.304 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:24.304 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:24.304 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:24.304 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:24.304 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:24.304 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:24.305 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.305 23:30:21 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:24.305 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:24.305 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.305 23:30:21 
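The device walk above maps each candidate PCI function to its kernel interface by globbing sysfs; the same lookup can be reproduced by hand (the BDF is taken from this run):

    pci=0000:0a:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"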
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:24.305 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
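Condensed, the nvmf_tcp_init plumbing just traced moves one port of the NIC into a private namespace for the target while the peer port stays in the root namespace as the initiator:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic back in

The two pings that follow confirm the 10.0.0.1/10.0.0.2 pair is reachable in both directions before the target starts.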
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:24:24.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:24.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms
00:24:24.305
00:24:24.305 --- 10.0.0.2 ping statistics ---
00:24:24.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:24.305 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms
00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:24.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:24.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms
00:24:24.305
00:24:24.305 --- 10.0.0.1 ping statistics ---
00:24:24.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:24.305 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms
00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0
00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:24:24.305 23:30:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:24:24.306 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF
00:24:24.306 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:24.306 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:24.306 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:24.306 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1444433
00:24:24.306 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:24.306 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1444433
00:24:24.306 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 1444433 ']'
00:24:24.306 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:24.306 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:24.306 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:24.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:24.306 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:24.306 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:24.565 [2024-07-25 23:30:22.060462] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:24:24.565 [2024-07-25 23:30:22.060544] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:24.565 EAL: No free 2048 kB hugepages reported on node 1
00:24:24.565 [2024-07-25 23:30:22.102346] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:24:24.565 [2024-07-25 23:30:22.132948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:24.565 [2024-07-25 23:30:22.230574] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:24.565 [2024-07-25 23:30:22.230647] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:24.565 [2024-07-25 23:30:22.230664] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:24.565 [2024-07-25 23:30:22.230678] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:24.565 [2024-07-25 23:30:22.230690] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:24.565 [2024-07-25 23:30:22.234085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:24.565 [2024-07-25 23:30:22.234143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:24:24.565 [2024-07-25 23:30:22.234221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:24:24.565 [2024-07-25 23:30:22.234225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:24.823 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:24.823 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0
00:24:24.823 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:24.823 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable
00:24:24.823 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:24.823 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:24.823 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:24:24.823 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:24.823 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:24.823 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout
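nvmfappstart, traced above, launches nvmf_tgt inside the target namespace and then blocks until the RPC socket answers. Only the launch line appears verbatim in the trace; the polling loop below is an assumed reconstruction of waitforlisten:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Block until the target answers on /var/tmp/spdk.sock (bounded in the real helper).
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done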
-- common/autotest_common.sh@10 -- # set +x
00:24:24.823 Malloc0
00:24:24.823 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:24.823 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
00:24:24.823 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:24.823 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:24.823 Delay0
00:24:24.823 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:24.824 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:24:24.824 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:24.824 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:24.824 [2024-07-25 23:30:22.405858] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:24.824 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:24.824 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:24:24.824 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:24.824 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:24.824 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:24.824 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:24:24.824 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:24.824 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:24.824 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:24.824 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:24.824 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:24.824 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:24.824 [2024-07-25 23:30:22.434139] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:24.824 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:24.824 23:30:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:24:25.758 23:30:23
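The RPC sequence traced above builds the whole data path in six calls; collected in one place, with scripts/rpc.py standing in for the suite's rpc_cmd helper (bdev_delay_create latencies are in microseconds, so 30 here is 30 us per class):

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # host side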
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME
00:24:25.758 23:30:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0
00:24:25.758 23:30:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:24:25.758 23:30:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:24:25.758 23:30:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2
00:24:27.659 23:30:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:24:27.659 23:30:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:24:27.659 23:30:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:24:27.659 23:30:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:24:27.659 23:30:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:24:27.659 23:30:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0
00:24:27.659 23:30:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1444743
00:24:27.659 23:30:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v
00:24:27.659 23:30:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3
00:24:27.659 [global]
00:24:27.659 thread=1
00:24:27.659 invalidate=1
00:24:27.659 rw=write
00:24:27.659 time_based=1
00:24:27.659 runtime=60
00:24:27.659 ioengine=libaio
00:24:27.659 direct=1
00:24:27.659 bs=4096
00:24:27.659 iodepth=1
00:24:27.659 norandommap=0
00:24:27.659 numjobs=1
00:24:27.659
00:24:27.659 verify_dump=1
00:24:27.659 verify_backlog=512
00:24:27.659 verify_state_save=0
00:24:27.659 do_verify=1
00:24:27.659 verify=crc32c-intel
00:24:27.659 [job0]
00:24:27.659 filename=/dev/nvme0n1
00:24:27.659 Could not set queue depth (nvme0n1)
00:24:27.659 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:24:27.659 fio-3.35
00:24:27.659 Starting 1 thread
00:24:30.945 23:30:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000
00:24:30.945 23:30:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:30.945 23:30:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:30.945 true
00:24:30.945 23:30:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:30.945 23:30:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
00:24:30.945 23:30:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:30.945 23:30:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout --
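The [global] section above is what fio-wrapper emits for -p nvmf -i 4096 -d 1 -t write -r 60 -v: 4 KiB blocks, queue depth 1, a 60 s time-based write phase, and crc32c-intel verification. An equivalent direct invocation, with the flag mapping inferred from the generated job file:

    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
        --time_based=1 --runtime=60 \
        --do_verify=1 --verify=crc32c-intel --verify_backlog=512 --verify_state_save=0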
common/autotest_common.sh@10 -- # set +x 00:24:30.945 true 00:24:30.945 23:30:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.945 23:30:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:24:30.945 23:30:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.945 23:30:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:30.945 true 00:24:30.945 23:30:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.945 23:30:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:24:30.945 23:30:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.945 23:30:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:30.945 true 00:24:30.945 23:30:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.945 23:30:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:24:33.481 23:30:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:24:33.481 23:30:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.481 23:30:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:33.481 true 00:24:33.481 23:30:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.481 23:30:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:24:33.481 23:30:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.481 23:30:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:33.481 true 00:24:33.481 23:30:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.481 23:30:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:24:33.481 23:30:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.481 23:30:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:33.481 true 00:24:33.481 23:30:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.481 23:30:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:24:33.481 23:30:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.481 23:30:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:33.481 true 00:24:33.481 23:30:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.481 23:30:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:24:33.481 23:30:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1444743 00:25:29.716 00:25:29.716 job0: (groupid=0, jobs=1): err= 0: pid=1444847: Thu Jul 25 23:31:25 2024 00:25:29.716 read: IOPS=119, BW=478KiB/s (489kB/s)(28.0MiB/60001msec) 00:25:29.716 slat (usec): min=4, max=11785, avg=16.49, stdev=139.31 00:25:29.716 clat (usec): min=235, max=40941k, avg=8097.18, stdev=483627.91 00:25:29.716 lat (usec): min=262, max=40941k, avg=8113.67, stdev=483627.99 00:25:29.716 clat percentiles (usec): 00:25:29.716 | 1.00th=[ 285], 5.00th=[ 297], 10.00th=[ 302], 00:25:29.716 | 20.00th=[ 314], 30.00th=[ 322], 40.00th=[ 330], 00:25:29.716 | 50.00th=[ 338], 60.00th=[ 351], 70.00th=[ 371], 00:25:29.716 | 80.00th=[ 404], 90.00th=[ 478], 95.00th=[ 578], 00:25:29.716 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:25:29.716 | 99.95th=[ 42206], 99.99th=[17112761] 00:25:29.716 write: IOPS=120, BW=483KiB/s (495kB/s)(28.3MiB/60001msec); 0 zone resets 00:25:29.716 slat (usec): min=6, max=29295, avg=18.48, stdev=344.07 00:25:29.716 clat (usec): min=179, max=439, avg=227.13, stdev=31.43 00:25:29.716 lat (usec): min=187, max=29574, avg=245.61, stdev=346.44 00:25:29.716 clat percentiles (usec): 00:25:29.716 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 204], 00:25:29.716 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 229], 00:25:29.716 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 262], 95.00th=[ 281], 00:25:29.716 | 99.00th=[ 355], 99.50th=[ 388], 99.90th=[ 424], 99.95th=[ 433], 00:25:29.716 | 99.99th=[ 441] 00:25:29.716 bw ( KiB/s): min= 4040, max= 8192, per=100.00%, avg=5734.40, stdev=1646.90, samples=10 00:25:29.716 iops : min= 1010, max= 2048, avg=1433.60, stdev=411.72, samples=10 00:25:29.716 lat (usec) : 250=42.92%, 500=53.42%, 750=1.21% 00:25:29.716 lat (msec) : 2=0.01%, 50=2.44%, >=2000=0.01% 00:25:29.716 cpu : usr=0.23%, sys=0.42%, ctx=14418, majf=0, minf=2 00:25:29.716 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:29.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.716 issued rwts: total=7168,7246,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:29.716 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:29.716 00:25:29.716 Run status group 0 (all jobs): 00:25:29.716 READ: bw=478KiB/s (489kB/s), 478KiB/s-478KiB/s (489kB/s-489kB/s), io=28.0MiB (29.4MB), run=60001-60001msec 00:25:29.716 WRITE: bw=483KiB/s (495kB/s), 483KiB/s-483KiB/s (495kB/s-495kB/s), io=28.3MiB (29.7MB), run=60001-60001msec 00:25:29.716 00:25:29.716 Disk stats (read/write): 00:25:29.716 nvme0n1: ios=7218/7168, merge=0/0, ticks=18146/1577, in_queue=19723, util=99.64% 00:25:29.716 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:29.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:29.716 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:29.716 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:25:29.716 23:31:25 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:29.716 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:29.716 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:29.716 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:29.716 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:25:29.716 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:25:29.716 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:25:29.716 nvmf hotplug test: fio successful as expected 00:25:29.716 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:29.716 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.716 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:29.716 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.716 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:25:29.716 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:25:29.716 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:25:29.716 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:29.716 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:25:29.716 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:29.716 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:25:29.716 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:29.716 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:29.716 rmmod nvme_tcp 00:25:29.716 rmmod nvme_fabrics 00:25:29.717 rmmod nvme_keyring 00:25:29.717 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:29.717 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:25:29.717 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:25:29.717 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1444433 ']' 00:25:29.717 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1444433 00:25:29.717 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 1444433 ']' 00:25:29.717 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 1444433 
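The rpc_cmd trace above is the core of the initiator-timeout exercise: while fio is writing, all four latency knobs of the delay bdev are raised past the initiator's default 30 s I/O timeout, held for a few seconds, then dropped back so the stalled I/O drains and fio can finish with "fio successful as expected". A condensed sketch of that toggle, assuming scripts/rpc.py is reachable and the namespace is backed by a delay bdev named Delay0 as in the trace (bdev_delay_update_latency takes the latency in microseconds; the traced run uses an even larger value for p99_write):

    rpc=scripts/rpc.py
    # push every latency knob above the 30 s initiator I/O timeout
    for lat in avg_read avg_write p99_read p99_write; do
        "$rpc" bdev_delay_update_latency Delay0 "$lat" 31000000
    done
    sleep 3                     # let in-flight fio I/O hit the timeout path
    # drop back to 30 us so the queued I/O completes and fio can verify
    for lat in avg_read avg_write p99_read p99_write; do
        "$rpc" bdev_delay_update_latency Delay0 "$lat" 30
    done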
00:25:29.717 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:25:29.717 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:29.717 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1444433 00:25:29.717 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:29.717 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:29.717 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1444433' 00:25:29.717 killing process with pid 1444433 00:25:29.717 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 1444433 00:25:29.717 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 1444433 00:25:29.717 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:29.717 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:29.717 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:29.717 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:29.717 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:29.717 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.717 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:29.717 23:31:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.285 23:31:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:30.285 00:25:30.285 real 1m8.111s 00:25:30.285 user 4m10.550s 00:25:30.285 sys 0m6.651s 00:25:30.285 23:31:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:30.286 23:31:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:30.286 ************************************ 00:25:30.286 END TEST nvmf_initiator_timeout 00:25:30.286 ************************************ 00:25:30.286 23:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:25:30.286 23:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:25:30.286 23:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:25:30.286 23:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:25:30.286 23:31:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:32.190 
23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:32.190 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.190 23:31:29 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:32.190 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:32.190 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:32.191 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:32.191 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:32.191 ************************************ 00:25:32.191 START TEST nvmf_perf_adq 00:25:32.191 ************************************ 00:25:32.191 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:32.451 * Looking for test storage... 00:25:32.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.451 23:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:32.451 23:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:25:32.451 23:31:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:34.359 
23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:34.359 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:34.359 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 
-- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:34.359 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.359 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:34.360 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.360 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:34.360 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:34.360 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.360 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:34.360 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:34.360 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.360 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:34.360 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:34.360 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:25:34.360 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:34.360 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:25:34.360 23:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:25:34.930 23:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:25:36.832 23:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:25:42.104 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:25:42.104 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:42.104 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.104 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:42.104 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:42.104 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:42.104 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.104 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:42.104 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.104 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:42.104 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:42.104 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:25:42.104 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.104 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:42.104 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:25:42.104 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:42.104 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:42.104 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:42.104 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:42.104 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:42.104 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:25:42.104 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:42.105 23:31:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:42.105 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:42.105 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:42.105 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:42.105 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:42.105 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:42.106 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:42.106 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:42.106 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:42.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:42.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:25:42.106 00:25:42.106 --- 10.0.0.2 ping statistics --- 00:25:42.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.106 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:25:42.106 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:42.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:42.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:25:42.106 00:25:42.106 --- 10.0.0.1 ping statistics --- 00:25:42.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.106 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:25:42.106 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:42.106 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:25:42.106 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:42.106 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:42.106 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:42.106 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:42.106 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:42.106 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:42.106 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:42.106 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:42.106 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:42.106 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:42.106 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.106 23:31:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1456320 00:25:42.106 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:42.106 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1456320 00:25:42.106 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1456320 ']' 00:25:42.106 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.106 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:42.106 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:42.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:42.106 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:42.106 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.106 [2024-07-25 23:31:39.639806] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:25:42.106 [2024-07-25 23:31:39.639893] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:42.106 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.106 [2024-07-25 23:31:39.676363] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:42.106 [2024-07-25 23:31:39.702927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:42.106 [2024-07-25 23:31:39.790968] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:42.106 [2024-07-25 23:31:39.791016] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:42.106 [2024-07-25 23:31:39.791040] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:42.106 [2024-07-25 23:31:39.791051] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:42.106 [2024-07-25 23:31:39.791082] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
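Before the target app above could be launched under "ip netns exec cvl_0_0_ns_spdk", nvmftestinit split the two E810 ports between the root namespace (initiator, 10.0.0.1 on cvl_0_1) and a private namespace (target, 10.0.0.2 on cvl_0_0), so initiator and target share one host without sharing a network stack; that is what the ip/iptables/ping trace a few screens up records. Condensed, with this rig's interface names (substitute your own):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target check
    # every target-side command then runs as: ip netns exec cvl_0_0_ns_spdk ...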
00:25:42.106 [2024-07-25 23:31:39.791323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.106 [2024-07-25 23:31:39.791351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:42.106 [2024-07-25 23:31:39.791413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:42.106 [2024-07-25 23:31:39.791416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.367 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:42.367 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:25:42.367 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:42.367 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:42.367 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.367 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:42.367 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:25:42.367 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:25:42.367 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:25:42.367 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.367 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.367 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.367 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:25:42.367 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:25:42.367 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.367 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.367 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.367 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:25:42.367 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.367 23:31:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.367 23:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.367 23:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:25:42.367 23:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.367 23:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.367 [2024-07-25 23:31:40.037911] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.367 23:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:25:42.367 23:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:42.367 23:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.367 23:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.367 Malloc1 00:25:42.367 23:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.367 23:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:42.367 23:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.367 23:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.367 23:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.367 23:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:42.367 23:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.367 23:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.367 23:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.367 23:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:42.367 23:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.367 23:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:42.367 [2024-07-25 23:31:40.091315] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:42.627 23:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.628 23:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1456462 00:25:42.628 23:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:25:42.628 23:31:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:42.628 EAL: No free 2048 kB hugepages reported on node 1 00:25:44.531 23:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:25:44.531 23:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.531 23:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:44.531 23:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.531 23:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:25:44.531 "tick_rate": 2700000000, 00:25:44.531 "poll_groups": [ 00:25:44.531 { 00:25:44.531 "name": "nvmf_tgt_poll_group_000", 00:25:44.531 "admin_qpairs": 1, 00:25:44.531 "io_qpairs": 1, 00:25:44.531 "current_admin_qpairs": 1, 00:25:44.531 
"current_io_qpairs": 1, 00:25:44.531 "pending_bdev_io": 0, 00:25:44.531 "completed_nvme_io": 19105, 00:25:44.531 "transports": [ 00:25:44.531 { 00:25:44.531 "trtype": "TCP" 00:25:44.531 } 00:25:44.531 ] 00:25:44.531 }, 00:25:44.531 { 00:25:44.531 "name": "nvmf_tgt_poll_group_001", 00:25:44.531 "admin_qpairs": 0, 00:25:44.531 "io_qpairs": 1, 00:25:44.531 "current_admin_qpairs": 0, 00:25:44.531 "current_io_qpairs": 1, 00:25:44.531 "pending_bdev_io": 0, 00:25:44.531 "completed_nvme_io": 19910, 00:25:44.531 "transports": [ 00:25:44.531 { 00:25:44.531 "trtype": "TCP" 00:25:44.531 } 00:25:44.531 ] 00:25:44.531 }, 00:25:44.531 { 00:25:44.531 "name": "nvmf_tgt_poll_group_002", 00:25:44.531 "admin_qpairs": 0, 00:25:44.531 "io_qpairs": 1, 00:25:44.531 "current_admin_qpairs": 0, 00:25:44.531 "current_io_qpairs": 1, 00:25:44.531 "pending_bdev_io": 0, 00:25:44.531 "completed_nvme_io": 20865, 00:25:44.531 "transports": [ 00:25:44.531 { 00:25:44.531 "trtype": "TCP" 00:25:44.531 } 00:25:44.531 ] 00:25:44.531 }, 00:25:44.531 { 00:25:44.531 "name": "nvmf_tgt_poll_group_003", 00:25:44.531 "admin_qpairs": 0, 00:25:44.531 "io_qpairs": 1, 00:25:44.531 "current_admin_qpairs": 0, 00:25:44.531 "current_io_qpairs": 1, 00:25:44.531 "pending_bdev_io": 0, 00:25:44.531 "completed_nvme_io": 20498, 00:25:44.531 "transports": [ 00:25:44.531 { 00:25:44.531 "trtype": "TCP" 00:25:44.531 } 00:25:44.531 ] 00:25:44.531 } 00:25:44.531 ] 00:25:44.531 }' 00:25:44.531 23:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:25:44.531 23:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:25:44.531 23:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:25:44.531 23:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:25:44.531 23:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1456462 00:25:52.638 Initializing NVMe Controllers 00:25:52.638 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:52.638 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:52.638 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:52.638 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:52.638 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:52.638 Initialization complete. Launching workers. 
00:25:52.638 ======================================================== 00:25:52.638 Latency(us) 00:25:52.638 Device Information : IOPS MiB/s Average min max 00:25:52.638 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10804.49 42.21 5925.18 2527.75 7566.45 00:25:52.638 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10537.40 41.16 6073.89 2833.98 9017.80 00:25:52.638 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10996.88 42.96 5825.79 3192.10 43651.24 00:25:52.638 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10054.01 39.27 6366.80 3048.66 9715.74 00:25:52.638 ======================================================== 00:25:52.638 Total : 42392.78 165.60 6041.10 2527.75 43651.24 00:25:52.638 00:25:52.638 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:25:52.638 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:52.638 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:25:52.638 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:52.638 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:25:52.638 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:52.638 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:52.638 rmmod nvme_tcp 00:25:52.638 rmmod nvme_fabrics 00:25:52.638 rmmod nvme_keyring 00:25:52.638 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:52.638 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:25:52.638 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:25:52.638 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1456320 ']' 00:25:52.638 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1456320 00:25:52.638 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1456320 ']' 00:25:52.638 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1456320 00:25:52.638 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:25:52.638 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:52.638 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1456320 00:25:52.897 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:52.897 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:52.897 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1456320' 00:25:52.897 killing process with pid 1456320 00:25:52.897 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1456320 00:25:52.897 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1456320 00:25:53.156 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:53.156 
23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:53.156 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:53.156 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:53.156 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:53.156 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.156 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:53.156 23:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.097 23:31:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:55.097 23:31:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:25:55.097 23:31:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:25:56.032 23:31:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:25:57.926 23:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:03.192 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:26:03.192 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:03.192 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:03.192 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:03.192 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:03.192 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:03.192 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.192 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:03.192 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.192 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:03.192 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:03.192 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:03.192 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.192 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:03.192 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:03.192 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:03.192 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:03.192 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:03.192 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:03.193 23:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:03.193 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound 
]] 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:03.193 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:03.193 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.193 23:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:03.193 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:03.193 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:03.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:03.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:26:03.193 00:26:03.193 --- 10.0.0.2 ping statistics --- 00:26:03.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.194 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:03.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:03.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:26:03.194 00:26:03.194 --- 10.0.0.1 ping statistics --- 00:26:03.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.194 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:03.194 net.core.busy_poll = 1 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:03.194 net.core.busy_read = 1 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:03.194 23:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1459072 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1459072 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1459072 ']' 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:03.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:03.194 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.194 [2024-07-25 23:32:00.724810] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:03.194 [2024-07-25 23:32:00.724905] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:03.194 EAL: No free 2048 kB hugepages reported on node 1 00:26:03.194 [2024-07-25 23:32:00.763908] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:03.194 [2024-07-25 23:32:00.790167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:03.194 [2024-07-25 23:32:00.880491] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:03.194 [2024-07-25 23:32:00.880545] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:03.194 [2024-07-25 23:32:00.880574] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:03.194 [2024-07-25 23:32:00.880585] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:03.194 [2024-07-25 23:32:00.880595] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
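With the first (non-busy-poll) pass done and the ice driver reloaded (rmmod ice; modprobe ice; sleep 5 above), adq_configure_driver pins NVMe/TCP traffic to a dedicated hardware traffic class before the target restarts. The same sequence as a standalone sketch; in the log every command is additionally wrapped in ip netns exec cvl_0_0_ns_spdk, omitted here for readability:

dev=cvl_0_0

# Hardware TC offload must be on for the mqprio/flower setup to land in the NIC.
ethtool --offload "$dev" hw-tc-offload on
ethtool --set-priv-flags "$dev" channel-pkt-inspect-optimize off

# Busy polling keeps receiving threads spinning on their own queues.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes: TC0 on queues 0-1, TC1 (the ADQ set) on queues 2-3.
tc qdisc add dev "$dev" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev "$dev" ingress

# Steer NVMe/TCP (10.0.0.2:4420) into TC1, offloaded to hardware (skip_sw).
tc filter add dev "$dev" protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

# Repo helper, as logged: align XPS so Tx queue selection follows the Rx queues.
scripts/perf/nvmf/set_xps_rxqs "$dev"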
00:26:03.194 [2024-07-25 23:32:00.880726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.194 [2024-07-25 23:32:00.880793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:03.194 [2024-07-25 23:32:00.880860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:03.194 [2024-07-25 23:32:00.880862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.452 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:03.452 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:26:03.452 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:03.452 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:03.452 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.452 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:03.452 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:26:03.452 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:03.452 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:03.453 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.453 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.453 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.453 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:03.453 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:03.453 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.453 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.453 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.453 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:03.453 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.453 23:32:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.453 23:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.453 23:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:03.453 23:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.453 23:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.453 [2024-07-25 23:32:01.109517] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:03.453 23:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:26:03.453 23:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:03.453 23:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.453 23:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.453 Malloc1 00:26:03.453 23:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.453 23:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:03.453 23:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.453 23:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.453 23:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.453 23:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:03.453 23:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.453 23:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.453 23:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.453 23:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:03.453 23:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.453 23:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:03.453 [2024-07-25 23:32:01.162691] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:03.453 23:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.453 23:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1459109 00:26:03.453 23:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:26:03.453 23:32:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:03.710 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.605 23:32:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:26:05.605 23:32:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.605 23:32:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:05.605 23:32:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.605 23:32:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:26:05.605 "tick_rate": 2700000000, 00:26:05.605 "poll_groups": [ 00:26:05.605 { 00:26:05.605 "name": "nvmf_tgt_poll_group_000", 00:26:05.605 "admin_qpairs": 1, 00:26:05.605 "io_qpairs": 2, 00:26:05.605 "current_admin_qpairs": 1, 00:26:05.605 
"current_io_qpairs": 2, 00:26:05.605 "pending_bdev_io": 0, 00:26:05.605 "completed_nvme_io": 26363, 00:26:05.605 "transports": [ 00:26:05.605 { 00:26:05.605 "trtype": "TCP" 00:26:05.605 } 00:26:05.605 ] 00:26:05.605 }, 00:26:05.605 { 00:26:05.605 "name": "nvmf_tgt_poll_group_001", 00:26:05.605 "admin_qpairs": 0, 00:26:05.605 "io_qpairs": 2, 00:26:05.605 "current_admin_qpairs": 0, 00:26:05.605 "current_io_qpairs": 2, 00:26:05.605 "pending_bdev_io": 0, 00:26:05.605 "completed_nvme_io": 26475, 00:26:05.605 "transports": [ 00:26:05.605 { 00:26:05.605 "trtype": "TCP" 00:26:05.605 } 00:26:05.605 ] 00:26:05.605 }, 00:26:05.605 { 00:26:05.605 "name": "nvmf_tgt_poll_group_002", 00:26:05.605 "admin_qpairs": 0, 00:26:05.605 "io_qpairs": 0, 00:26:05.605 "current_admin_qpairs": 0, 00:26:05.605 "current_io_qpairs": 0, 00:26:05.605 "pending_bdev_io": 0, 00:26:05.605 "completed_nvme_io": 0, 00:26:05.605 "transports": [ 00:26:05.605 { 00:26:05.605 "trtype": "TCP" 00:26:05.605 } 00:26:05.605 ] 00:26:05.605 }, 00:26:05.605 { 00:26:05.605 "name": "nvmf_tgt_poll_group_003", 00:26:05.605 "admin_qpairs": 0, 00:26:05.605 "io_qpairs": 0, 00:26:05.605 "current_admin_qpairs": 0, 00:26:05.605 "current_io_qpairs": 0, 00:26:05.605 "pending_bdev_io": 0, 00:26:05.605 "completed_nvme_io": 0, 00:26:05.605 "transports": [ 00:26:05.605 { 00:26:05.605 "trtype": "TCP" 00:26:05.605 } 00:26:05.605 ] 00:26:05.605 } 00:26:05.605 ] 00:26:05.605 }' 00:26:05.605 23:32:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:05.605 23:32:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:26:05.605 23:32:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:26:05.605 23:32:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:26:05.605 23:32:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1459109 00:26:13.707 Initializing NVMe Controllers 00:26:13.707 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:13.707 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:13.707 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:13.707 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:13.707 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:13.707 Initialization complete. Launching workers. 
00:26:13.707 ======================================================== 00:26:13.707 Latency(us) 00:26:13.707 Device Information : IOPS MiB/s Average min max 00:26:13.707 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7306.50 28.54 8759.59 1727.69 52980.48 00:26:13.707 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6526.80 25.50 9823.75 1759.55 54757.90 00:26:13.707 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6521.30 25.47 9839.07 1753.64 52574.16 00:26:13.707 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7456.50 29.13 8583.09 1686.50 54241.29 00:26:13.707 ======================================================== 00:26:13.707 Total : 27811.09 108.64 9215.13 1686.50 54757.90 00:26:13.707 00:26:13.707 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:26:13.707 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:13.707 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:13.707 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:13.707 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:13.707 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:13.707 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:13.707 rmmod nvme_tcp 00:26:13.707 rmmod nvme_fabrics 00:26:13.707 rmmod nvme_keyring 00:26:13.707 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:13.707 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:13.707 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:13.707 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1459072 ']' 00:26:13.707 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1459072 00:26:13.707 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1459072 ']' 00:26:13.707 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1459072 00:26:13.707 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:26:13.707 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:13.707 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1459072 00:26:13.965 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:13.965 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:13.965 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1459072' 00:26:13.965 killing process with pid 1459072 00:26:13.966 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1459072 00:26:13.966 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1459072 00:26:13.966 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:13.966 
23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:13.966 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:13.966 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:13.966 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:13.966 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.966 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:13.966 23:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:26:17.251 00:26:17.251 real 0m44.850s 00:26:17.251 user 2m38.304s 00:26:17.251 sys 0m10.037s 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:17.251 ************************************ 00:26:17.251 END TEST nvmf_perf_adq 00:26:17.251 ************************************ 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:17.251 ************************************ 00:26:17.251 START TEST nvmf_shutdown 00:26:17.251 ************************************ 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:17.251 * Looking for test storage... 
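That closes the perf_adq suite: 44.85 s of wall time against 2 m 38 s of user CPU, which is expected since the four busy-poll reactors spin by design. The START/END banners and the time summary wrapping each suite come from the harness's run_test helper; roughly the following pattern, a sketch rather than the exact autotest_common.sh implementation:

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"    # emits the real/user/sys summary seen above
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}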
00:26:17.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.251 23:32:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:17.251 23:32:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:17.251 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:17.251 ************************************ 00:26:17.251 START TEST nvmf_shutdown_tc1 00:26:17.252 ************************************ 00:26:17.252 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:26:17.252 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:26:17.252 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:17.252 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:17.252 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:17.252 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:17.252 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:17.252 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:17.252 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.252 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:17.252 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.252 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:17.252 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:17.252 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:17.252 23:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:19.786 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:19.786 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:19.786 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:19.786 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:19.786 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:19.786 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:19.786 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:19.786 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:26:19.786 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:26:19.786 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:26:19.786 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:26:19.786 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:26:19.786 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:26:19.786 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:26:19.786 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:19.786 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:19.786 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:19.786 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:19.786 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:19.787 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:19.787 23:32:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:19.787 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:19.787 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:19.787 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:19.787 23:32:17 
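[Editor's aside] What nvmf_tcp_init is doing here, in brief: one port of the dual-port e810 NIC (cvl_0_0) is moved into a private network namespace to act as the target, while its sibling port (cvl_0_1) stays in the default namespace as the initiator, so NVMe/TCP traffic crosses real hardware between the two. A condensed recap of the commands traced above and just below (the same commands, stripped of their xtrace prefixes; this is a sketch, not the nvmf/common.sh source):

# Target side: cvl_0_0 lives in namespace cvl_0_0_ns_spdk as 10.0.0.2/24
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# Initiator side: cvl_0_1 stays in the default namespace as 10.0.0.1/24
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit NVMe/TCP (port 4420), then verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator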
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:19.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:19.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:26:19.787 00:26:19.787 --- 10.0.0.2 ping statistics --- 00:26:19.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.787 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:26:19.787 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:19.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:19.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:26:19.788 00:26:19.788 --- 10.0.0.1 ping statistics --- 00:26:19.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.788 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:26:19.788 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:19.788 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:26:19.788 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:19.788 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:19.788 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:19.788 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:19.788 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:19.788 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:19.788 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:19.788 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:19.788 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:19.788 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:19.788 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:26:19.788 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1462402 00:26:19.788 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1462402 00:26:19.788 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:19.788 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1462402 ']' 00:26:19.788 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.788 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:19.788 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:19.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:19.788 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:19.788 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:19.788 [2024-07-25 23:32:17.246239] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:19.788 [2024-07-25 23:32:17.246317] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:19.788 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.788 [2024-07-25 23:32:17.283816] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:19.788 [2024-07-25 23:32:17.310514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:19.788 [2024-07-25 23:32:17.398161] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:19.788 [2024-07-25 23:32:17.398210] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:19.788 [2024-07-25 23:32:17.398224] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:19.788 [2024-07-25 23:32:17.398236] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:19.788 [2024-07-25 23:32:17.398246] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
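[Editor's aside] A note on the core masks: nvmf_tgt is started with -m 0x1E, which is binary 11110, i.e. cores 1 through 4, matching the four "Reactor started on core 1..4" notices in the lines that follow; the bdev_svc and bdevperf clients later run with -m 0x1 (core 0), so target and initiator never share a core. A small helper to decode such masks (illustrative only, not part of the test scripts):

# Print the core numbers selected by an SPDK-style hex core mask
decode_cpumask() {
  local mask=$(( $1 )) core=0
  while (( mask )); do
    (( mask & 1 )) && echo "core $core"
    (( mask >>= 1, core++ ))
  done
}
decode_cpumask 0x1E   # -> core 1, core 2, core 3, core 4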
00:26:19.788 [2024-07-25 23:32:17.398337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:19.788 [2024-07-25 23:32:17.398410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:19.788 [2024-07-25 23:32:17.398476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:19.788 [2024-07-25 23:32:17.398478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.046 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:20.046 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:26:20.046 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:20.046 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:20.046 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:20.046 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:20.046 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:20.046 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.046 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:20.046 [2024-07-25 23:32:17.542224] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.046 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.046 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:20.046 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:20.046 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:20.046 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:20.046 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:20.046 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:20.046 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:20.046 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:20.046 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:20.046 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:20.046 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:20.046 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:26:20.047 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:20.047 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:20.047 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:20.047 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:20.047 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:20.047 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:20.047 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:20.047 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:20.047 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:20.047 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:20.047 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:20.047 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:20.047 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:20.047 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:20.047 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.047 23:32:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:20.047 Malloc1 00:26:20.047 [2024-07-25 23:32:17.617243] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:20.047 Malloc2 00:26:20.047 Malloc3 00:26:20.047 Malloc4 00:26:20.305 Malloc5 00:26:20.305 Malloc6 00:26:20.305 Malloc7 00:26:20.305 Malloc8 00:26:20.305 Malloc9 00:26:20.305 Malloc10 00:26:20.563 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.563 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:20.563 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:20.563 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:20.563 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1462581 00:26:20.563 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1462581 /var/tmp/bdevperf.sock 00:26:20.563 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1462581 ']' 00:26:20.563 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:20.563 23:32:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:20.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:20.564 { 00:26:20.564 "params": { 00:26:20.564 "name": "Nvme$subsystem", 00:26:20.564 "trtype": "$TEST_TRANSPORT", 00:26:20.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.564 "adrfam": "ipv4", 00:26:20.564 "trsvcid": "$NVMF_PORT", 00:26:20.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.564 "hdgst": ${hdgst:-false}, 00:26:20.564 "ddgst": ${ddgst:-false} 00:26:20.564 }, 00:26:20.564 "method": "bdev_nvme_attach_controller" 00:26:20.564 } 00:26:20.564 EOF 00:26:20.564 )") 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:20.564 { 00:26:20.564 "params": { 00:26:20.564 "name": "Nvme$subsystem", 00:26:20.564 "trtype": "$TEST_TRANSPORT", 00:26:20.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.564 "adrfam": "ipv4", 00:26:20.564 "trsvcid": "$NVMF_PORT", 00:26:20.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.564 "hdgst": ${hdgst:-false}, 00:26:20.564 "ddgst": ${ddgst:-false} 00:26:20.564 }, 00:26:20.564 "method": "bdev_nvme_attach_controller" 00:26:20.564 } 00:26:20.564 EOF 00:26:20.564 )") 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:20.564 { 00:26:20.564 "params": { 00:26:20.564 "name": 
"Nvme$subsystem", 00:26:20.564 "trtype": "$TEST_TRANSPORT", 00:26:20.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.564 "adrfam": "ipv4", 00:26:20.564 "trsvcid": "$NVMF_PORT", 00:26:20.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.564 "hdgst": ${hdgst:-false}, 00:26:20.564 "ddgst": ${ddgst:-false} 00:26:20.564 }, 00:26:20.564 "method": "bdev_nvme_attach_controller" 00:26:20.564 } 00:26:20.564 EOF 00:26:20.564 )") 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:20.564 { 00:26:20.564 "params": { 00:26:20.564 "name": "Nvme$subsystem", 00:26:20.564 "trtype": "$TEST_TRANSPORT", 00:26:20.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.564 "adrfam": "ipv4", 00:26:20.564 "trsvcid": "$NVMF_PORT", 00:26:20.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.564 "hdgst": ${hdgst:-false}, 00:26:20.564 "ddgst": ${ddgst:-false} 00:26:20.564 }, 00:26:20.564 "method": "bdev_nvme_attach_controller" 00:26:20.564 } 00:26:20.564 EOF 00:26:20.564 )") 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:20.564 { 00:26:20.564 "params": { 00:26:20.564 "name": "Nvme$subsystem", 00:26:20.564 "trtype": "$TEST_TRANSPORT", 00:26:20.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.564 "adrfam": "ipv4", 00:26:20.564 "trsvcid": "$NVMF_PORT", 00:26:20.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.564 "hdgst": ${hdgst:-false}, 00:26:20.564 "ddgst": ${ddgst:-false} 00:26:20.564 }, 00:26:20.564 "method": "bdev_nvme_attach_controller" 00:26:20.564 } 00:26:20.564 EOF 00:26:20.564 )") 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:20.564 { 00:26:20.564 "params": { 00:26:20.564 "name": "Nvme$subsystem", 00:26:20.564 "trtype": "$TEST_TRANSPORT", 00:26:20.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.564 "adrfam": "ipv4", 00:26:20.564 "trsvcid": "$NVMF_PORT", 00:26:20.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.564 "hdgst": ${hdgst:-false}, 00:26:20.564 "ddgst": ${ddgst:-false} 00:26:20.564 }, 00:26:20.564 "method": "bdev_nvme_attach_controller" 00:26:20.564 } 00:26:20.564 EOF 00:26:20.564 )") 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:20.564 { 00:26:20.564 "params": { 00:26:20.564 "name": "Nvme$subsystem", 00:26:20.564 "trtype": "$TEST_TRANSPORT", 00:26:20.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.564 "adrfam": "ipv4", 00:26:20.564 "trsvcid": "$NVMF_PORT", 00:26:20.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.564 "hdgst": ${hdgst:-false}, 00:26:20.564 "ddgst": ${ddgst:-false} 00:26:20.564 }, 00:26:20.564 "method": "bdev_nvme_attach_controller" 00:26:20.564 } 00:26:20.564 EOF 00:26:20.564 )") 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:20.564 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:20.565 { 00:26:20.565 "params": { 00:26:20.565 "name": "Nvme$subsystem", 00:26:20.565 "trtype": "$TEST_TRANSPORT", 00:26:20.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.565 "adrfam": "ipv4", 00:26:20.565 "trsvcid": "$NVMF_PORT", 00:26:20.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.565 "hdgst": ${hdgst:-false}, 00:26:20.565 "ddgst": ${ddgst:-false} 00:26:20.565 }, 00:26:20.565 "method": "bdev_nvme_attach_controller" 00:26:20.565 } 00:26:20.565 EOF 00:26:20.565 )") 00:26:20.565 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:20.565 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:20.565 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:20.565 { 00:26:20.565 "params": { 00:26:20.565 "name": "Nvme$subsystem", 00:26:20.565 "trtype": "$TEST_TRANSPORT", 00:26:20.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.565 "adrfam": "ipv4", 00:26:20.565 "trsvcid": "$NVMF_PORT", 00:26:20.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.565 "hdgst": ${hdgst:-false}, 00:26:20.565 "ddgst": ${ddgst:-false} 00:26:20.565 }, 00:26:20.565 "method": "bdev_nvme_attach_controller" 00:26:20.565 } 00:26:20.565 EOF 00:26:20.565 )") 00:26:20.565 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:20.565 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:20.565 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:20.565 { 00:26:20.565 "params": { 00:26:20.565 "name": "Nvme$subsystem", 00:26:20.565 "trtype": "$TEST_TRANSPORT", 00:26:20.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.565 "adrfam": "ipv4", 00:26:20.565 "trsvcid": "$NVMF_PORT", 00:26:20.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.565 "hdgst": ${hdgst:-false}, 00:26:20.565 "ddgst": ${ddgst:-false} 00:26:20.565 }, 00:26:20.565 "method": "bdev_nvme_attach_controller" 00:26:20.565 } 00:26:20.565 EOF 00:26:20.565 )") 00:26:20.565 23:32:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:20.565 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:26:20.565 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:20.565 23:32:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:20.565 "params": { 00:26:20.565 "name": "Nvme1", 00:26:20.565 "trtype": "tcp", 00:26:20.565 "traddr": "10.0.0.2", 00:26:20.565 "adrfam": "ipv4", 00:26:20.565 "trsvcid": "4420", 00:26:20.565 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:20.565 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:20.565 "hdgst": false, 00:26:20.565 "ddgst": false 00:26:20.565 }, 00:26:20.565 "method": "bdev_nvme_attach_controller" 00:26:20.565 },{ 00:26:20.565 "params": { 00:26:20.565 "name": "Nvme2", 00:26:20.565 "trtype": "tcp", 00:26:20.565 "traddr": "10.0.0.2", 00:26:20.565 "adrfam": "ipv4", 00:26:20.565 "trsvcid": "4420", 00:26:20.565 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:20.565 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:20.565 "hdgst": false, 00:26:20.565 "ddgst": false 00:26:20.565 }, 00:26:20.565 "method": "bdev_nvme_attach_controller" 00:26:20.565 },{ 00:26:20.565 "params": { 00:26:20.565 "name": "Nvme3", 00:26:20.565 "trtype": "tcp", 00:26:20.565 "traddr": "10.0.0.2", 00:26:20.565 "adrfam": "ipv4", 00:26:20.565 "trsvcid": "4420", 00:26:20.565 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:20.565 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:20.565 "hdgst": false, 00:26:20.565 "ddgst": false 00:26:20.565 }, 00:26:20.565 "method": "bdev_nvme_attach_controller" 00:26:20.565 },{ 00:26:20.565 "params": { 00:26:20.565 "name": "Nvme4", 00:26:20.565 "trtype": "tcp", 00:26:20.565 "traddr": "10.0.0.2", 00:26:20.565 "adrfam": "ipv4", 00:26:20.565 "trsvcid": "4420", 00:26:20.565 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:20.565 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:20.565 "hdgst": false, 00:26:20.565 "ddgst": false 00:26:20.565 }, 00:26:20.565 "method": "bdev_nvme_attach_controller" 00:26:20.565 },{ 00:26:20.565 "params": { 00:26:20.565 "name": "Nvme5", 00:26:20.565 "trtype": "tcp", 00:26:20.565 "traddr": "10.0.0.2", 00:26:20.565 "adrfam": "ipv4", 00:26:20.565 "trsvcid": "4420", 00:26:20.565 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:20.565 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:20.565 "hdgst": false, 00:26:20.565 "ddgst": false 00:26:20.565 }, 00:26:20.565 "method": "bdev_nvme_attach_controller" 00:26:20.565 },{ 00:26:20.565 "params": { 00:26:20.565 "name": "Nvme6", 00:26:20.565 "trtype": "tcp", 00:26:20.565 "traddr": "10.0.0.2", 00:26:20.565 "adrfam": "ipv4", 00:26:20.565 "trsvcid": "4420", 00:26:20.565 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:20.565 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:20.565 "hdgst": false, 00:26:20.565 "ddgst": false 00:26:20.565 }, 00:26:20.565 "method": "bdev_nvme_attach_controller" 00:26:20.565 },{ 00:26:20.565 "params": { 00:26:20.565 "name": "Nvme7", 00:26:20.565 "trtype": "tcp", 00:26:20.565 "traddr": "10.0.0.2", 00:26:20.565 "adrfam": "ipv4", 00:26:20.565 "trsvcid": "4420", 00:26:20.565 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:20.565 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:20.565 "hdgst": false, 00:26:20.565 "ddgst": false 00:26:20.565 }, 00:26:20.565 "method": "bdev_nvme_attach_controller" 00:26:20.565 },{ 00:26:20.565 "params": { 00:26:20.565 "name": "Nvme8", 00:26:20.565 "trtype": "tcp", 
00:26:20.565 "traddr": "10.0.0.2", 00:26:20.565 "adrfam": "ipv4", 00:26:20.565 "trsvcid": "4420", 00:26:20.565 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:20.565 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:20.565 "hdgst": false, 00:26:20.565 "ddgst": false 00:26:20.565 }, 00:26:20.565 "method": "bdev_nvme_attach_controller" 00:26:20.565 },{ 00:26:20.565 "params": { 00:26:20.565 "name": "Nvme9", 00:26:20.565 "trtype": "tcp", 00:26:20.565 "traddr": "10.0.0.2", 00:26:20.565 "adrfam": "ipv4", 00:26:20.565 "trsvcid": "4420", 00:26:20.565 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:20.565 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:20.565 "hdgst": false, 00:26:20.565 "ddgst": false 00:26:20.565 }, 00:26:20.565 "method": "bdev_nvme_attach_controller" 00:26:20.565 },{ 00:26:20.565 "params": { 00:26:20.565 "name": "Nvme10", 00:26:20.565 "trtype": "tcp", 00:26:20.565 "traddr": "10.0.0.2", 00:26:20.565 "adrfam": "ipv4", 00:26:20.565 "trsvcid": "4420", 00:26:20.565 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:20.565 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:20.565 "hdgst": false, 00:26:20.565 "ddgst": false 00:26:20.565 }, 00:26:20.565 "method": "bdev_nvme_attach_controller" 00:26:20.565 }' 00:26:20.565 [2024-07-25 23:32:18.116533] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:20.565 [2024-07-25 23:32:18.116604] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:20.565 EAL: No free 2048 kB hugepages reported on node 1 00:26:20.565 [2024-07-25 23:32:18.152112] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
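[Editor's aside] For readability: the printf/jq pipeline above emits the comma-joined "params" objects shown, which gen_nvmf_target_json then presumably wraps into the bdev-subsystem layout that bdevperf's --json flag expects. A hand-written sketch of the final document for Nvme1 alone follows; only the params block is verbatim from this log, and the outer wrapper keys are an assumption based on SPDK's JSON-config format:

# Hypothetical standalone config equivalent to the /dev/fd/63 input (Nvme1 only)
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF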
00:26:20.566 [2024-07-25 23:32:18.181206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.566 [2024-07-25 23:32:18.267980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.512 23:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:22.512 23:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:26:22.512 23:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:22.512 23:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.512 23:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:22.512 23:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.512 23:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1462581 00:26:22.512 23:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:26:22.512 23:32:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:26:23.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1462581 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1462402 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.453 { 00:26:23.453 "params": { 00:26:23.453 "name": "Nvme$subsystem", 00:26:23.453 "trtype": "$TEST_TRANSPORT", 00:26:23.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.453 "adrfam": "ipv4", 00:26:23.453 "trsvcid": "$NVMF_PORT", 00:26:23.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.453 "hdgst": ${hdgst:-false}, 00:26:23.453 "ddgst": ${ddgst:-false} 00:26:23.453 }, 00:26:23.453 "method": "bdev_nvme_attach_controller" 00:26:23.453 } 00:26:23.453 EOF 00:26:23.453 )") 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.453 23:32:21 
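[Editor's aside] The trace above is the heart of tc1: the first bdev_svc app (pid 1462581), which had attached controllers to all ten subsystems, is killed with SIGKILL, and shutdown.sh@88 then uses kill -0 to confirm that the nvmf target (pid 1462402) survived the abrupt disconnect before launching bdevperf against it. The liveness idiom in isolation (a generic sketch, not the shutdown.sh source):

# Abruptly kill the consumer, then verify the target process still exists;
# kill -0 sends no signal, it only checks that the pid is alive and signalable
kill -9 "$perfpid"
sleep 1
if kill -0 "$nvmfpid" 2>/dev/null; then
  echo "target $nvmfpid survived client SIGKILL"
else
  echo "target died" >&2
  exit 1
fi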
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.453 { 00:26:23.453 "params": { 00:26:23.453 "name": "Nvme$subsystem", 00:26:23.453 "trtype": "$TEST_TRANSPORT", 00:26:23.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.453 "adrfam": "ipv4", 00:26:23.453 "trsvcid": "$NVMF_PORT", 00:26:23.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.453 "hdgst": ${hdgst:-false}, 00:26:23.453 "ddgst": ${ddgst:-false} 00:26:23.453 }, 00:26:23.453 "method": "bdev_nvme_attach_controller" 00:26:23.453 } 00:26:23.453 EOF 00:26:23.453 )") 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.453 { 00:26:23.453 "params": { 00:26:23.453 "name": "Nvme$subsystem", 00:26:23.453 "trtype": "$TEST_TRANSPORT", 00:26:23.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.453 "adrfam": "ipv4", 00:26:23.453 "trsvcid": "$NVMF_PORT", 00:26:23.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.453 "hdgst": ${hdgst:-false}, 00:26:23.453 "ddgst": ${ddgst:-false} 00:26:23.453 }, 00:26:23.453 "method": "bdev_nvme_attach_controller" 00:26:23.453 } 00:26:23.453 EOF 00:26:23.453 )") 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.453 { 00:26:23.453 "params": { 00:26:23.453 "name": "Nvme$subsystem", 00:26:23.453 "trtype": "$TEST_TRANSPORT", 00:26:23.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.453 "adrfam": "ipv4", 00:26:23.453 "trsvcid": "$NVMF_PORT", 00:26:23.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.453 "hdgst": ${hdgst:-false}, 00:26:23.453 "ddgst": ${ddgst:-false} 00:26:23.453 }, 00:26:23.453 "method": "bdev_nvme_attach_controller" 00:26:23.453 } 00:26:23.453 EOF 00:26:23.453 )") 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.453 { 00:26:23.453 "params": { 00:26:23.453 "name": "Nvme$subsystem", 00:26:23.453 "trtype": "$TEST_TRANSPORT", 00:26:23.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.453 "adrfam": "ipv4", 00:26:23.453 "trsvcid": "$NVMF_PORT", 00:26:23.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.453 "hdgst": ${hdgst:-false}, 00:26:23.453 "ddgst": ${ddgst:-false} 00:26:23.453 }, 00:26:23.453 "method": "bdev_nvme_attach_controller" 00:26:23.453 } 00:26:23.453 EOF 00:26:23.453 )") 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.453 { 00:26:23.453 "params": { 00:26:23.453 "name": "Nvme$subsystem", 00:26:23.453 "trtype": "$TEST_TRANSPORT", 00:26:23.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.453 "adrfam": "ipv4", 00:26:23.453 "trsvcid": "$NVMF_PORT", 00:26:23.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.453 "hdgst": ${hdgst:-false}, 00:26:23.453 "ddgst": ${ddgst:-false} 00:26:23.453 }, 00:26:23.453 "method": "bdev_nvme_attach_controller" 00:26:23.453 } 00:26:23.453 EOF 00:26:23.453 )") 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.453 { 00:26:23.453 "params": { 00:26:23.453 "name": "Nvme$subsystem", 00:26:23.453 "trtype": "$TEST_TRANSPORT", 00:26:23.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.453 "adrfam": "ipv4", 00:26:23.453 "trsvcid": "$NVMF_PORT", 00:26:23.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.453 "hdgst": ${hdgst:-false}, 00:26:23.453 "ddgst": ${ddgst:-false} 00:26:23.453 }, 00:26:23.453 "method": "bdev_nvme_attach_controller" 00:26:23.453 } 00:26:23.453 EOF 00:26:23.453 )") 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.453 { 00:26:23.453 "params": { 00:26:23.453 "name": "Nvme$subsystem", 00:26:23.453 "trtype": "$TEST_TRANSPORT", 00:26:23.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.453 "adrfam": "ipv4", 00:26:23.453 "trsvcid": "$NVMF_PORT", 00:26:23.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.453 "hdgst": ${hdgst:-false}, 00:26:23.453 "ddgst": ${ddgst:-false} 00:26:23.453 }, 00:26:23.453 "method": "bdev_nvme_attach_controller" 00:26:23.453 } 00:26:23.453 EOF 00:26:23.453 )") 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.453 { 00:26:23.453 "params": { 00:26:23.453 "name": "Nvme$subsystem", 00:26:23.453 "trtype": "$TEST_TRANSPORT", 00:26:23.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.453 "adrfam": "ipv4", 00:26:23.453 "trsvcid": "$NVMF_PORT", 00:26:23.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.453 "hdgst": ${hdgst:-false}, 00:26:23.453 "ddgst": ${ddgst:-false} 00:26:23.453 }, 
00:26:23.453 "method": "bdev_nvme_attach_controller" 00:26:23.453 } 00:26:23.453 EOF 00:26:23.453 )") 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.453 { 00:26:23.453 "params": { 00:26:23.453 "name": "Nvme$subsystem", 00:26:23.453 "trtype": "$TEST_TRANSPORT", 00:26:23.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.453 "adrfam": "ipv4", 00:26:23.453 "trsvcid": "$NVMF_PORT", 00:26:23.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.453 "hdgst": ${hdgst:-false}, 00:26:23.453 "ddgst": ${ddgst:-false} 00:26:23.453 }, 00:26:23.453 "method": "bdev_nvme_attach_controller" 00:26:23.453 } 00:26:23.453 EOF 00:26:23.453 )") 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:23.453 23:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:23.453 "params": { 00:26:23.453 "name": "Nvme1", 00:26:23.453 "trtype": "tcp", 00:26:23.453 "traddr": "10.0.0.2", 00:26:23.453 "adrfam": "ipv4", 00:26:23.453 "trsvcid": "4420", 00:26:23.453 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:23.453 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:23.453 "hdgst": false, 00:26:23.453 "ddgst": false 00:26:23.453 }, 00:26:23.453 "method": "bdev_nvme_attach_controller" 00:26:23.453 },{ 00:26:23.453 "params": { 00:26:23.453 "name": "Nvme2", 00:26:23.453 "trtype": "tcp", 00:26:23.453 "traddr": "10.0.0.2", 00:26:23.453 "adrfam": "ipv4", 00:26:23.453 "trsvcid": "4420", 00:26:23.453 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:23.453 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:23.453 "hdgst": false, 00:26:23.453 "ddgst": false 00:26:23.453 }, 00:26:23.453 "method": "bdev_nvme_attach_controller" 00:26:23.453 },{ 00:26:23.453 "params": { 00:26:23.453 "name": "Nvme3", 00:26:23.453 "trtype": "tcp", 00:26:23.453 "traddr": "10.0.0.2", 00:26:23.453 "adrfam": "ipv4", 00:26:23.453 "trsvcid": "4420", 00:26:23.453 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:23.453 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:23.453 "hdgst": false, 00:26:23.453 "ddgst": false 00:26:23.453 }, 00:26:23.453 "method": "bdev_nvme_attach_controller" 00:26:23.453 },{ 00:26:23.453 "params": { 00:26:23.453 "name": "Nvme4", 00:26:23.453 "trtype": "tcp", 00:26:23.453 "traddr": "10.0.0.2", 00:26:23.453 "adrfam": "ipv4", 00:26:23.453 "trsvcid": "4420", 00:26:23.453 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:23.453 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:23.453 "hdgst": false, 00:26:23.453 "ddgst": false 00:26:23.453 }, 00:26:23.453 "method": "bdev_nvme_attach_controller" 00:26:23.453 },{ 00:26:23.453 "params": { 00:26:23.453 "name": "Nvme5", 00:26:23.453 "trtype": "tcp", 00:26:23.453 "traddr": "10.0.0.2", 00:26:23.453 "adrfam": "ipv4", 00:26:23.453 "trsvcid": "4420", 00:26:23.453 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:23.453 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:23.453 "hdgst": false, 
00:26:23.453 "ddgst": false 00:26:23.453 }, 00:26:23.453 "method": "bdev_nvme_attach_controller" 00:26:23.453 },{ 00:26:23.453 "params": { 00:26:23.453 "name": "Nvme6", 00:26:23.453 "trtype": "tcp", 00:26:23.453 "traddr": "10.0.0.2", 00:26:23.453 "adrfam": "ipv4", 00:26:23.453 "trsvcid": "4420", 00:26:23.453 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:23.453 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:23.453 "hdgst": false, 00:26:23.453 "ddgst": false 00:26:23.453 }, 00:26:23.453 "method": "bdev_nvme_attach_controller" 00:26:23.453 },{ 00:26:23.453 "params": { 00:26:23.453 "name": "Nvme7", 00:26:23.453 "trtype": "tcp", 00:26:23.454 "traddr": "10.0.0.2", 00:26:23.454 "adrfam": "ipv4", 00:26:23.454 "trsvcid": "4420", 00:26:23.454 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:23.454 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:23.454 "hdgst": false, 00:26:23.454 "ddgst": false 00:26:23.454 }, 00:26:23.454 "method": "bdev_nvme_attach_controller" 00:26:23.454 },{ 00:26:23.454 "params": { 00:26:23.454 "name": "Nvme8", 00:26:23.454 "trtype": "tcp", 00:26:23.454 "traddr": "10.0.0.2", 00:26:23.454 "adrfam": "ipv4", 00:26:23.454 "trsvcid": "4420", 00:26:23.454 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:23.454 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:23.454 "hdgst": false, 00:26:23.454 "ddgst": false 00:26:23.454 }, 00:26:23.454 "method": "bdev_nvme_attach_controller" 00:26:23.454 },{ 00:26:23.454 "params": { 00:26:23.454 "name": "Nvme9", 00:26:23.454 "trtype": "tcp", 00:26:23.454 "traddr": "10.0.0.2", 00:26:23.454 "adrfam": "ipv4", 00:26:23.454 "trsvcid": "4420", 00:26:23.454 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:23.454 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:23.454 "hdgst": false, 00:26:23.454 "ddgst": false 00:26:23.454 }, 00:26:23.454 "method": "bdev_nvme_attach_controller" 00:26:23.454 },{ 00:26:23.454 "params": { 00:26:23.454 "name": "Nvme10", 00:26:23.454 "trtype": "tcp", 00:26:23.454 "traddr": "10.0.0.2", 00:26:23.454 "adrfam": "ipv4", 00:26:23.454 "trsvcid": "4420", 00:26:23.454 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:23.454 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:23.454 "hdgst": false, 00:26:23.454 "ddgst": false 00:26:23.454 }, 00:26:23.454 "method": "bdev_nvme_attach_controller" 00:26:23.454 }' 00:26:23.454 [2024-07-25 23:32:21.123704] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:23.454 [2024-07-25 23:32:21.123784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1463001 ] 00:26:23.454 EAL: No free 2048 kB hugepages reported on node 1 00:26:23.454 [2024-07-25 23:32:21.161159] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:23.711 [2024-07-25 23:32:21.191730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.711 [2024-07-25 23:32:21.279066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.088 Running I/O for 1 seconds... 
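[Editor's aside] Context for the ten jobs reported below: bdevperf runs with queue depth 64 (-q 64), 64 KiB I/Os (-o 65536), the verify workload (-w verify), for 1 second (-t 1), against namespaces backed by the Malloc1-Malloc10 bdevs created earlier under nqn.2016-06.io.spdk:cnode1 through cnode10. The rpcs.txt batch that created them is not echoed in this log; each per-subsystem stanza presumably resembles the following rpc.py calls (names inferred from the bdev and NQN strings in this trace, so treat sizes and serial numbers as placeholders):

# Hypothetical per-subsystem stanza (i=1); rpc.py talks to the target's UNIX socket
rpc_py=./scripts/rpc.py
$rpc_py bdev_malloc_create -b Malloc1 128 512
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420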
00:26:26.464
00:26:26.464                                                                                                 Latency(us)
00:26:26.464 Device Information                                                       : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:26.464 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:26.464 Verification LBA range: start 0x0 length 0x400
00:26:26.464 Nvme1n1                                                                  :       1.09     251.26      15.70       0.00     0.00  244490.21   12621.75  246997.90
00:26:26.464 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:26.464 Verification LBA range: start 0x0 length 0x400
00:26:26.464 Nvme2n1                                                                  :       1.14     224.63      14.04       0.00     0.00  277515.38   34175.81  233016.89
00:26:26.464 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:26.464 Verification LBA range: start 0x0 length 0x400
00:26:26.464 Nvme3n1                                                                  :       1.17     273.70      17.11       0.00     0.00  222827.25   15534.46  251658.24
00:26:26.464 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:26.464 Verification LBA range: start 0x0 length 0x400
00:26:26.464 Nvme4n1                                                                  :       1.10     232.99      14.56       0.00     0.00  258258.87   20097.71  259425.47
00:26:26.464 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:26.464 Verification LBA range: start 0x0 length 0x400
00:26:26.464 Nvme5n1                                                                  :       1.13     226.42      14.15       0.00     0.00  261450.90   20388.98  250104.79
00:26:26.464 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:26.464 Verification LBA range: start 0x0 length 0x400
00:26:26.464 Nvme6n1                                                                  :       1.17     217.92      13.62       0.00     0.00  267958.80   22136.60  262532.36
00:26:26.464 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:26.464 Verification LBA range: start 0x0 length 0x400
00:26:26.464 Nvme7n1                                                                  :       1.19     269.65      16.85       0.00     0.00  213164.18   17767.54  259425.47
00:26:26.464 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:26.464 Verification LBA range: start 0x0 length 0x400
00:26:26.464 Nvme8n1                                                                  :       1.14     225.12      14.07       0.00     0.00  249611.76   19709.35  237677.23
00:26:26.464 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:26.464 Verification LBA range: start 0x0 length 0x400
00:26:26.464 Nvme9n1                                                                  :       1.19     268.03      16.75       0.00     0.00  207420.23   12621.75  268746.15
00:26:26.464 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:26.464 Verification LBA range: start 0x0 length 0x400
00:26:26.464 Nvme10n1                                                                 :       1.18     216.72      13.54       0.00     0.00  251765.38   18544.26  285834.05
00:26:26.464 ===================================================================================================================
00:26:26.464 Total                                                                    :      2406.46     150.40       0.00     0.00  243293.03   12621.75  285834.05
00:26:26.464
00:26:26.464 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:26:26.464 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:26:26.464 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:26:26.464 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:26.464 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:26:26.464 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:26.464 23:32:24
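[Editor's aside] Sanity check on the totals row above: 2406.46 IOPS at the 64 KiB I/O size requested with -o 65536 works out to 2406.46 * 65536 / 1048576 = 150.40 MiB/s, which matches the aggregate MiB/s column exactly. As a one-liner:

# Cross-check aggregate throughput from the totals row
awk 'BEGIN { iops = 2406.46; io_size = 65536
             printf "%.2f MiB/s\n", iops * io_size / 1048576 }'   # -> 150.40 MiB/s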
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:26:26.464 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:26.464 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:26:26.464 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:26.464 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:26.464 rmmod nvme_tcp 00:26:26.464 rmmod nvme_fabrics 00:26:26.464 rmmod nvme_keyring 00:26:26.464 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:26.464 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:26:26.464 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:26:26.464 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1462402 ']' 00:26:26.464 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1462402 00:26:26.464 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1462402 ']' 00:26:26.464 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1462402 00:26:26.464 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:26:26.464 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:26.464 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1462402 00:26:26.464 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:26.464 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:26.464 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1462402' 00:26:26.464 killing process with pid 1462402 00:26:26.464 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1462402 00:26:26.464 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1462402 00:26:27.032 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:27.032 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:27.032 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:27.032 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:27.032 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:27.032 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
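The records above are nvmftestfini tearing tc1 down: sync, then unloading the kernel initiator modules inside a bounded retry loop with set +e in effect (a module that still has references must not abort the script), then killprocess on the target pid. A condensed sketch of the unload loop as reconstructed from this xtrace; the exact control flow in nvmf/common.sh may differ:

  # Reconstruction of the traced cleanup, not verbatim library code:
  sync
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
  done
  set -e

The rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring show the first pass succeeding, so the loop exits immediately and the trace continues with removal of the test network namespace.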
00:26:27.032 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:27.033 23:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.978 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:28.978 00:26:28.978 real 0m11.783s 00:26:28.978 user 0m33.391s 00:26:28.978 sys 0m3.214s 00:26:28.978 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:28.978 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:28.978 ************************************ 00:26:28.978 END TEST nvmf_shutdown_tc1 00:26:28.978 ************************************ 00:26:28.978 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:28.978 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:28.978 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:28.978 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:29.237 ************************************ 00:26:29.237 START TEST nvmf_shutdown_tc2 00:26:29.237 ************************************ 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.237 23:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:29.237 23:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:29.237 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:29.237 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:29.237 23:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:29.237 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:29.237 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:29.237 23:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:29.237 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:29.238 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:29.238 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:29.238 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:29.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:29.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:26:29.238 00:26:29.238 --- 10.0.0.2 ping statistics --- 00:26:29.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.238 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:26:29.238 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:29.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:29.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:26:29.238 00:26:29.238 --- 10.0.0.1 ping statistics --- 00:26:29.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.238 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:26:29.238 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:29.238 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:26:29.238 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:29.238 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:29.238 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:29.238 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:29.238 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:29.238 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:29.238 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:29.238 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:29.238 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:29.238 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:29.238 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.238 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1463775 00:26:29.238 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:29.238 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1463775 00:26:29.238 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1463775 ']' 00:26:29.238 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.238 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:29.238 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
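Both pings answering in well under a millisecond confirm the back-to-back topology assembled just above: the first of the two Intel ports discovered earlier (cvl_0_0) was moved into a private network namespace and addressed as the target at 10.0.0.2, while its sibling (cvl_0_1) stayed in the default namespace as the initiator at 10.0.0.1. Condensed from the trace (interface names are specific to this host):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port in the host firewall

The nvmf_tgt application now starting is wrapped in ip netns exec so that it binds inside the namespace, on the target side of this link.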
00:26:29.238 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:29.238 23:32:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.496 [2024-07-25 23:32:26.963599] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:29.496 [2024-07-25 23:32:26.963673] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:29.496 EAL: No free 2048 kB hugepages reported on node 1 00:26:29.496 [2024-07-25 23:32:27.002076] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:29.496 [2024-07-25 23:32:27.028971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:29.496 [2024-07-25 23:32:27.114718] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:29.496 [2024-07-25 23:32:27.114771] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:29.496 [2024-07-25 23:32:27.114794] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:29.496 [2024-07-25 23:32:27.114806] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:29.496 [2024-07-25 23:32:27.114815] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:29.496 [2024-07-25 23:32:27.114898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:29.496 [2024-07-25 23:32:27.114960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:29.496 [2024-07-25 23:32:27.115026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:29.496 [2024-07-25 23:32:27.115028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.754 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:29.754 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:29.754 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:29.754 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:29.754 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.754 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.754 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:29.754 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.754 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.754 [2024-07-25 23:32:27.258224] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.754 23:32:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.754 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:29.754 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:29.754 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:29.754 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.754 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:29.754 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:29.754 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:29.754 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:29.754 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:29.754 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:29.754 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:29.754 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:29.754 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:29.755 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:29.755 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:29.755 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:29.755 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:29.755 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:29.755 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:29.755 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:29.755 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:29.755 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:29.755 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:29.755 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:29.755 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:26:29.755 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:29.755 
23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.755 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.755 Malloc1 00:26:29.755 [2024-07-25 23:32:27.333232] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.755 Malloc2 00:26:29.755 Malloc3 00:26:29.755 Malloc4 00:26:30.013 Malloc5 00:26:30.013 Malloc6 00:26:30.013 Malloc7 00:26:30.013 Malloc8 00:26:30.013 Malloc9 00:26:30.271 Malloc10 00:26:30.271 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.271 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:30.271 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:30.271 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:30.271 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1463952 00:26:30.271 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1463952 /var/tmp/bdevperf.sock 00:26:30.271 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1463952 ']' 00:26:30.271 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:30.271 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:30.271 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:30.271 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:30.271 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:26:30.271 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:30.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
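The create_subsystems phase above appends one here-document per subsystem to rpcs.txt (the cat bodies are not expanded by the xtrace) and then replays the whole file through a single rpc_cmd; the Malloc1 through Malloc10 notices and the listener on 10.0.0.2 port 4420 are its visible effect. A plausible shape for each appended batch, assuming the standard SPDK RPC names and illustrative malloc sizing; this is inferred, not read from the log:

  # Assumed contents of one per-subsystem batch in rpcs.txt:
  bdev_malloc_create -b Malloc$i 64 512
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

Replaying one batched file keeps setup to a single rpc.py invocation rather than dozens of separate ones, which matters on a timed CI node.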
00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.272 { 00:26:30.272 "params": { 00:26:30.272 "name": "Nvme$subsystem", 00:26:30.272 "trtype": "$TEST_TRANSPORT", 00:26:30.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.272 "adrfam": "ipv4", 00:26:30.272 "trsvcid": "$NVMF_PORT", 00:26:30.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.272 "hdgst": ${hdgst:-false}, 00:26:30.272 "ddgst": ${ddgst:-false} 00:26:30.272 }, 00:26:30.272 "method": "bdev_nvme_attach_controller" 00:26:30.272 } 00:26:30.272 EOF 00:26:30.272 )") 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.272 { 00:26:30.272 "params": { 00:26:30.272 "name": "Nvme$subsystem", 00:26:30.272 "trtype": "$TEST_TRANSPORT", 00:26:30.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.272 "adrfam": "ipv4", 00:26:30.272 "trsvcid": "$NVMF_PORT", 00:26:30.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.272 "hdgst": ${hdgst:-false}, 00:26:30.272 "ddgst": ${ddgst:-false} 00:26:30.272 }, 00:26:30.272 "method": "bdev_nvme_attach_controller" 00:26:30.272 } 00:26:30.272 EOF 00:26:30.272 )") 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.272 { 00:26:30.272 "params": { 00:26:30.272 "name": "Nvme$subsystem", 00:26:30.272 "trtype": "$TEST_TRANSPORT", 00:26:30.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.272 "adrfam": "ipv4", 00:26:30.272 "trsvcid": "$NVMF_PORT", 00:26:30.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.272 "hdgst": ${hdgst:-false}, 00:26:30.272 "ddgst": ${ddgst:-false} 00:26:30.272 }, 00:26:30.272 "method": "bdev_nvme_attach_controller" 00:26:30.272 } 00:26:30.272 EOF 00:26:30.272 )") 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.272 { 00:26:30.272 "params": { 00:26:30.272 "name": "Nvme$subsystem", 00:26:30.272 
"trtype": "$TEST_TRANSPORT", 00:26:30.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.272 "adrfam": "ipv4", 00:26:30.272 "trsvcid": "$NVMF_PORT", 00:26:30.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.272 "hdgst": ${hdgst:-false}, 00:26:30.272 "ddgst": ${ddgst:-false} 00:26:30.272 }, 00:26:30.272 "method": "bdev_nvme_attach_controller" 00:26:30.272 } 00:26:30.272 EOF 00:26:30.272 )") 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.272 { 00:26:30.272 "params": { 00:26:30.272 "name": "Nvme$subsystem", 00:26:30.272 "trtype": "$TEST_TRANSPORT", 00:26:30.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.272 "adrfam": "ipv4", 00:26:30.272 "trsvcid": "$NVMF_PORT", 00:26:30.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.272 "hdgst": ${hdgst:-false}, 00:26:30.272 "ddgst": ${ddgst:-false} 00:26:30.272 }, 00:26:30.272 "method": "bdev_nvme_attach_controller" 00:26:30.272 } 00:26:30.272 EOF 00:26:30.272 )") 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.272 { 00:26:30.272 "params": { 00:26:30.272 "name": "Nvme$subsystem", 00:26:30.272 "trtype": "$TEST_TRANSPORT", 00:26:30.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.272 "adrfam": "ipv4", 00:26:30.272 "trsvcid": "$NVMF_PORT", 00:26:30.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.272 "hdgst": ${hdgst:-false}, 00:26:30.272 "ddgst": ${ddgst:-false} 00:26:30.272 }, 00:26:30.272 "method": "bdev_nvme_attach_controller" 00:26:30.272 } 00:26:30.272 EOF 00:26:30.272 )") 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.272 { 00:26:30.272 "params": { 00:26:30.272 "name": "Nvme$subsystem", 00:26:30.272 "trtype": "$TEST_TRANSPORT", 00:26:30.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.272 "adrfam": "ipv4", 00:26:30.272 "trsvcid": "$NVMF_PORT", 00:26:30.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.272 "hdgst": ${hdgst:-false}, 00:26:30.272 "ddgst": ${ddgst:-false} 00:26:30.272 }, 00:26:30.272 "method": "bdev_nvme_attach_controller" 00:26:30.272 } 00:26:30.272 EOF 00:26:30.272 )") 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.272 23:32:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.272 { 00:26:30.272 "params": { 00:26:30.272 "name": "Nvme$subsystem", 00:26:30.272 "trtype": "$TEST_TRANSPORT", 00:26:30.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.272 "adrfam": "ipv4", 00:26:30.272 "trsvcid": "$NVMF_PORT", 00:26:30.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.272 "hdgst": ${hdgst:-false}, 00:26:30.272 "ddgst": ${ddgst:-false} 00:26:30.272 }, 00:26:30.272 "method": "bdev_nvme_attach_controller" 00:26:30.272 } 00:26:30.272 EOF 00:26:30.272 )") 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.272 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.272 { 00:26:30.272 "params": { 00:26:30.272 "name": "Nvme$subsystem", 00:26:30.272 "trtype": "$TEST_TRANSPORT", 00:26:30.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.272 "adrfam": "ipv4", 00:26:30.272 "trsvcid": "$NVMF_PORT", 00:26:30.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.272 "hdgst": ${hdgst:-false}, 00:26:30.272 "ddgst": ${ddgst:-false} 00:26:30.272 }, 00:26:30.272 "method": "bdev_nvme_attach_controller" 00:26:30.272 } 00:26:30.272 EOF 00:26:30.272 )") 00:26:30.273 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:30.273 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.273 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.273 { 00:26:30.273 "params": { 00:26:30.273 "name": "Nvme$subsystem", 00:26:30.273 "trtype": "$TEST_TRANSPORT", 00:26:30.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.273 "adrfam": "ipv4", 00:26:30.273 "trsvcid": "$NVMF_PORT", 00:26:30.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.273 "hdgst": ${hdgst:-false}, 00:26:30.273 "ddgst": ${ddgst:-false} 00:26:30.273 }, 00:26:30.273 "method": "bdev_nvme_attach_controller" 00:26:30.273 } 00:26:30.273 EOF 00:26:30.273 )") 00:26:30.273 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:26:30.273 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
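The loop traced above is gen_nvmf_target_json assembling the bdevperf configuration: each pass expands the here-document template (shown verbatim in the trace) into one element of a bash array, and the fragments are then comma-joined via IFS and validated with jq. A minimal runnable sketch of that join mechanism; the real helper wraps the fragments in the full bdev-subsystem envelope and carries the complete parameter template:

  # Sketch of the traced mechanism, not verbatim nvmf/common.sh:
  gen_sketch() {
      local subsystem config=()
      for subsystem in "$@"; do
          config+=("{\"params\": {\"name\": \"Nvme$subsystem\"}, \"method\": \"bdev_nvme_attach_controller\"}")
      done
      local IFS=,
      printf '{"config": [%s]}\n' "${config[*]}" | jq .
  }
  gen_sketch 1 2 3

Because "${config[*]}" joins array elements with the first character of IFS, setting IFS to a comma is what turns ten standalone JSON objects into a valid array body; the joined, pretty-printed result is the ten-controller document printed next in the log.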
00:26:30.273 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:26:30.273 23:32:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:30.273 "params": { 00:26:30.273 "name": "Nvme1", 00:26:30.273 "trtype": "tcp", 00:26:30.273 "traddr": "10.0.0.2", 00:26:30.273 "adrfam": "ipv4", 00:26:30.273 "trsvcid": "4420", 00:26:30.273 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:30.273 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:30.273 "hdgst": false, 00:26:30.273 "ddgst": false 00:26:30.273 }, 00:26:30.273 "method": "bdev_nvme_attach_controller" 00:26:30.273 },{ 00:26:30.273 "params": { 00:26:30.273 "name": "Nvme2", 00:26:30.273 "trtype": "tcp", 00:26:30.273 "traddr": "10.0.0.2", 00:26:30.273 "adrfam": "ipv4", 00:26:30.273 "trsvcid": "4420", 00:26:30.273 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:30.273 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:30.273 "hdgst": false, 00:26:30.273 "ddgst": false 00:26:30.273 }, 00:26:30.273 "method": "bdev_nvme_attach_controller" 00:26:30.273 },{ 00:26:30.273 "params": { 00:26:30.273 "name": "Nvme3", 00:26:30.273 "trtype": "tcp", 00:26:30.273 "traddr": "10.0.0.2", 00:26:30.273 "adrfam": "ipv4", 00:26:30.273 "trsvcid": "4420", 00:26:30.273 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:30.273 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:30.273 "hdgst": false, 00:26:30.273 "ddgst": false 00:26:30.273 }, 00:26:30.273 "method": "bdev_nvme_attach_controller" 00:26:30.273 },{ 00:26:30.273 "params": { 00:26:30.273 "name": "Nvme4", 00:26:30.273 "trtype": "tcp", 00:26:30.273 "traddr": "10.0.0.2", 00:26:30.273 "adrfam": "ipv4", 00:26:30.273 "trsvcid": "4420", 00:26:30.273 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:30.273 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:30.273 "hdgst": false, 00:26:30.273 "ddgst": false 00:26:30.273 }, 00:26:30.273 "method": "bdev_nvme_attach_controller" 00:26:30.273 },{ 00:26:30.273 "params": { 00:26:30.273 "name": "Nvme5", 00:26:30.273 "trtype": "tcp", 00:26:30.273 "traddr": "10.0.0.2", 00:26:30.273 "adrfam": "ipv4", 00:26:30.273 "trsvcid": "4420", 00:26:30.273 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:30.273 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:30.273 "hdgst": false, 00:26:30.273 "ddgst": false 00:26:30.273 }, 00:26:30.273 "method": "bdev_nvme_attach_controller" 00:26:30.273 },{ 00:26:30.273 "params": { 00:26:30.273 "name": "Nvme6", 00:26:30.273 "trtype": "tcp", 00:26:30.273 "traddr": "10.0.0.2", 00:26:30.273 "adrfam": "ipv4", 00:26:30.273 "trsvcid": "4420", 00:26:30.273 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:30.273 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:30.273 "hdgst": false, 00:26:30.273 "ddgst": false 00:26:30.273 }, 00:26:30.273 "method": "bdev_nvme_attach_controller" 00:26:30.273 },{ 00:26:30.273 "params": { 00:26:30.273 "name": "Nvme7", 00:26:30.273 "trtype": "tcp", 00:26:30.273 "traddr": "10.0.0.2", 00:26:30.273 "adrfam": "ipv4", 00:26:30.273 "trsvcid": "4420", 00:26:30.273 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:30.273 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:30.273 "hdgst": false, 00:26:30.273 "ddgst": false 00:26:30.273 }, 00:26:30.273 "method": "bdev_nvme_attach_controller" 00:26:30.273 },{ 00:26:30.273 "params": { 00:26:30.273 "name": "Nvme8", 00:26:30.273 "trtype": "tcp", 00:26:30.273 "traddr": "10.0.0.2", 00:26:30.273 "adrfam": "ipv4", 00:26:30.273 "trsvcid": "4420", 00:26:30.273 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:30.273 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:26:30.273 "hdgst": false, 00:26:30.273 "ddgst": false 00:26:30.273 }, 00:26:30.273 "method": "bdev_nvme_attach_controller" 00:26:30.273 },{ 00:26:30.273 "params": { 00:26:30.273 "name": "Nvme9", 00:26:30.273 "trtype": "tcp", 00:26:30.273 "traddr": "10.0.0.2", 00:26:30.273 "adrfam": "ipv4", 00:26:30.273 "trsvcid": "4420", 00:26:30.273 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:30.273 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:30.273 "hdgst": false, 00:26:30.273 "ddgst": false 00:26:30.273 }, 00:26:30.273 "method": "bdev_nvme_attach_controller" 00:26:30.273 },{ 00:26:30.273 "params": { 00:26:30.273 "name": "Nvme10", 00:26:30.273 "trtype": "tcp", 00:26:30.273 "traddr": "10.0.0.2", 00:26:30.273 "adrfam": "ipv4", 00:26:30.273 "trsvcid": "4420", 00:26:30.273 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:30.273 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:30.273 "hdgst": false, 00:26:30.273 "ddgst": false 00:26:30.273 }, 00:26:30.273 "method": "bdev_nvme_attach_controller" 00:26:30.273 }' 00:26:30.273 [2024-07-25 23:32:27.833503] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:30.273 [2024-07-25 23:32:27.833591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1463952 ] 00:26:30.273 EAL: No free 2048 kB hugepages reported on node 1 00:26:30.273 [2024-07-25 23:32:27.867597] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:30.273 [2024-07-25 23:32:27.896857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.273 [2024-07-25 23:32:27.983718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.180 Running I/O for 10 seconds... 
00:26:32.180 23:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:32.180 23:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:32.180 23:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:32.180 23:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.180 23:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:32.180 23:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.180 23:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:32.180 23:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:32.180 23:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:26:32.180 23:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:26:32.180 23:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:26:32.180 23:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:26:32.180 23:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:32.180 23:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:32.180 23:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:32.180 23:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.180 23:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:32.180 23:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.180 23:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:26:32.180 23:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:26:32.180 23:32:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:32.439 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:32.439 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:32.439 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:32.439 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:32.439 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.439 23:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:32.439 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.439 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:26:32.439 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:26:32.439 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:32.697 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:32.697 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:32.697 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:32.697 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:32.698 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.698 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:32.956 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.956 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:26:32.956 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:26:32.956 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:26:32.956 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:26:32.956 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:26:32.956 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1463952 00:26:32.956 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1463952 ']' 00:26:32.956 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1463952 00:26:32.956 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:26:32.956 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:32.956 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1463952 00:26:32.956 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:32.956 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:32.956 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1463952' 00:26:32.956 killing process with pid 1463952 00:26:32.956 23:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1463952
23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1463952
00:26:32.956 Received shutdown signal, test time was about 0.981288 seconds
00:26:32.956
00:26:32.956 Latency(us)
00:26:32.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:32.956 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:32.956 Verification LBA range: start 0x0 length 0x400
00:26:32.956 Nvme1n1 : 0.98 261.10 16.32 0.00 0.00 242344.39 21456.97 267192.70
00:26:32.956 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:32.956 Verification LBA range: start 0x0 length 0x400
00:26:32.956 Nvme2n1 : 0.95 202.40 12.65 0.00 0.00 306477.38 19903.53 264085.81
00:26:32.956 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:32.956 Verification LBA range: start 0x0 length 0x400
00:26:32.956 Nvme3n1 : 0.97 263.53 16.47 0.00 0.00 230754.99 18447.17 257872.02
00:26:32.956 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:32.956 Verification LBA range: start 0x0 length 0x400
00:26:32.956 Nvme4n1 : 0.96 265.88 16.62 0.00 0.00 223897.79 16602.45 260978.92
00:26:32.957 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:32.957 Verification LBA range: start 0x0 length 0x400
00:26:32.957 Nvme5n1 : 0.96 205.07 12.82 0.00 0.00 282123.30 9126.49 257872.02
00:26:32.957 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:32.957 Verification LBA range: start 0x0 length 0x400
00:26:32.957 Nvme6n1 : 0.98 262.53 16.41 0.00 0.00 217995.57 17476.27 256318.58
00:26:32.957 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:32.957 Verification LBA range: start 0x0 length 0x400
00:26:32.957 Nvme7n1 : 0.94 208.85 13.05 0.00 0.00 265238.26 3713.71 264085.81
00:26:32.957 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:32.957 Verification LBA range: start 0x0 length 0x400
00:26:32.957 Nvme8n1 : 0.93 211.27 13.20 0.00 0.00 255514.81 3495.25 259425.47
00:26:32.957 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:32.957 Verification LBA range: start 0x0 length 0x400
00:26:32.957 Nvme9n1 : 0.97 198.52 12.41 0.00 0.00 270144.35 22427.88 296708.17
00:26:32.957 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:32.957 Verification LBA range: start 0x0 length 0x400
00:26:32.957 Nvme10n1 : 0.96 205.88 12.87 0.00 0.00 253318.33 4975.88 262532.36
00:26:32.957 ===================================================================================================================
00:26:32.957 Total : 2285.02 142.81 0.00 0.00 251830.54 3495.25 296708.17
00:26:33.215 23:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:26:34.152 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1463775
00:26:34.152 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:26:34.152 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:26:34.152 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:34.152 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:34.152 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:34.152 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:34.152 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:26:34.152 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:34.152 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:26:34.152 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:34.152 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:34.152 rmmod nvme_tcp 00:26:34.152 rmmod nvme_fabrics 00:26:34.152 rmmod nvme_keyring 00:26:34.152 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:34.152 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:26:34.152 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:26:34.152 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1463775 ']' 00:26:34.152 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1463775 00:26:34.152 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1463775 ']' 00:26:34.152 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1463775 00:26:34.152 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:26:34.152 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:34.152 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1463775 00:26:34.153 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:34.153 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:34.153 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1463775' 00:26:34.153 killing process with pid 1463775 00:26:34.153 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1463775 00:26:34.153 23:32:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1463775 00:26:34.722 23:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:34.722 23:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
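Both kill paths above (pid 1463952 for bdevperf, pid 1463775 for the target) funnel through the same killprocess helper in autotest_common.sh. A minimal sketch of that pattern, reconstructed from the traced @950-@974 lines; the sudo branch and error reporting of the real helper are simplified here:

killprocess() {
    local pid=$1 process_name
    # kill -0 delivers no signal; it only verifies the pid is still alive.
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1
    if [ "$(uname)" = Linux ]; then
        # Resolve the command name (the trace shows reactor_0 / reactor_1).
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    # The traced runs take this branch because the name is not "sudo";
    # the real helper resolves sudo-wrapped processes differently.
    if [ "$process_name" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    # Reap the process so its exit status is collected before teardown continues.
    wait "$pid"
}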
00:26:34.722 23:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:34.722 23:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:34.722 23:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:34.722 23:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.722 23:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:34.722 23:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.253 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:37.253 00:26:37.253 real 0m7.709s 00:26:37.253 user 0m23.343s 00:26:37.253 sys 0m1.535s 00:26:37.253 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:37.253 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:37.253 ************************************ 00:26:37.253 END TEST nvmf_shutdown_tc2 00:26:37.253 ************************************ 00:26:37.253 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:26:37.253 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:37.253 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:37.253 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:37.253 ************************************ 00:26:37.253 START TEST nvmf_shutdown_tc3 00:26:37.253 ************************************ 00:26:37.253 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:26:37.253 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:26:37.253 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:37.253 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
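The END TEST / START TEST banners and the real/user/sys triple in this stretch come from run_test, the harness wrapper that times one test case and fences it in the log. A rough sketch inferred from the traced argument check ('[' 2 -le 1 ']') and the banner output; the banner width and xtrace plumbing are approximations:

run_test() {
    # First argument names the case; the rest is the command to run.
    if [ "$#" -le 1 ]; then
        echo "usage: run_test <name> <command> [args...]" >&2
        return 1
    fi
    local name=$1
    shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    # The shell's time keyword produces the real/user/sys lines seen above.
    time "$@"
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
}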
00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:37.254 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:37.254 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:37.254 23:32:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:37.254 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:37.254 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.254 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:37.255 23:32:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:37.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:37.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:26:37.255 00:26:37.255 --- 10.0.0.2 ping statistics --- 00:26:37.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.255 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:37.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:37.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:26:37.255 00:26:37.255 --- 10.0.0.1 ping statistics --- 00:26:37.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.255 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1464867 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1464867 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1464867 ']' 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
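nvmf_tcp_init, traced a few lines up, is what makes this single-host run behave like a real fabric: the target port (cvl_0_0) moves into the cvl_0_0_ns_spdk namespace with 10.0.0.2, the initiator port (cvl_0_1) stays in the root namespace with 10.0.0.1, and every nvmf_tgt invocation is then prefixed with ip netns exec. The commands below are lifted from the trace; only the comments are added:

# Move the target-side port into a private network namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator keeps 10.0.0.1 in the root namespace; target gets 10.0.0.2 inside.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring both ports (and the namespace loopback) up.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP listener port on the initiator-facing interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# One ping in each direction proves the path before the target starts.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1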
00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:37.255 [2024-07-25 23:32:34.697846] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:37.255 [2024-07-25 23:32:34.697915] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:37.255 EAL: No free 2048 kB hugepages reported on node 1 00:26:37.255 [2024-07-25 23:32:34.734166] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:37.255 [2024-07-25 23:32:34.765447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:37.255 [2024-07-25 23:32:34.858715] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:37.255 [2024-07-25 23:32:34.858776] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:37.255 [2024-07-25 23:32:34.858793] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:37.255 [2024-07-25 23:32:34.858807] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:37.255 [2024-07-25 23:32:34.858818] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:37.255 [2024-07-25 23:32:34.858881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:37.255 [2024-07-25 23:32:34.859003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:37.255 [2024-07-25 23:32:34.859084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:37.255 [2024-07-25 23:32:34.859087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:37.255 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:37.513 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:37.513 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:37.513 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.513 23:32:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:37.513 [2024-07-25 23:32:34.995195] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:37.513 23:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.513 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:37.513 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:37.513 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:37.513 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:37.513 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:37.513 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:37.513 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:37.513 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:37.513 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:37.513 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:37.513 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:37.513 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:37.514 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:37.514 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:37.514 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:37.514 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:37.514 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:37.514 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:37.514 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:37.514 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:37.514 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:37.514 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:37.514 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:37.514 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:37.514 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:26:37.514 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:37.514 
23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.514 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:37.514 Malloc1 00:26:37.514 [2024-07-25 23:32:35.070116] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:37.514 Malloc2 00:26:37.514 Malloc3 00:26:37.514 Malloc4 00:26:37.514 Malloc5 00:26:37.773 Malloc6 00:26:37.773 Malloc7 00:26:37.773 Malloc8 00:26:37.773 Malloc9 00:26:37.773 Malloc10 00:26:38.031 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.031 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:38.031 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:38.031 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:38.031 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1464932 00:26:38.031 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1464932 /var/tmp/bdevperf.sock 00:26:38.031 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1464932 ']' 00:26:38.031 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:38.031 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:38.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
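bdevperf never sees a config file on disk: --json /dev/fd/63 in the command line above is a process substitution fed by gen_nvmf_target_json 1 2 ... 10, whose xtrace expansion fills the next stretch of the log. Condensed, the generator builds one bdev_nvme_attach_controller stanza per subsystem id and joins them with IFS=','; the outer subsystems wrapper is an assumption, since only the joined stanzas are visible in the trace:

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach stanza per subsystem id; nothing is emitted yet.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the stanzas with commas; jq validates and pretty-prints the result.
    # The wrapper object below is assumed, not shown in the trace.
    local IFS=,
    jq . <<EOF
{"subsystems": [{"subsystem": "bdev", "config": [${config[*]}]}]}
EOF
}

The launch at shutdown.sh@124 then passes this as --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10), which is why the log records the file as /dev/fd/63.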
00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:38.032 { 00:26:38.032 "params": { 00:26:38.032 "name": "Nvme$subsystem", 00:26:38.032 "trtype": "$TEST_TRANSPORT", 00:26:38.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.032 "adrfam": "ipv4", 00:26:38.032 "trsvcid": "$NVMF_PORT", 00:26:38.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.032 "hdgst": ${hdgst:-false}, 00:26:38.032 "ddgst": ${ddgst:-false} 00:26:38.032 }, 00:26:38.032 "method": "bdev_nvme_attach_controller" 00:26:38.032 } 00:26:38.032 EOF 00:26:38.032 )") 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:38.032 { 00:26:38.032 "params": { 00:26:38.032 "name": "Nvme$subsystem", 00:26:38.032 "trtype": "$TEST_TRANSPORT", 00:26:38.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.032 "adrfam": "ipv4", 00:26:38.032 "trsvcid": "$NVMF_PORT", 00:26:38.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.032 "hdgst": ${hdgst:-false}, 00:26:38.032 "ddgst": ${ddgst:-false} 00:26:38.032 }, 00:26:38.032 "method": "bdev_nvme_attach_controller" 00:26:38.032 } 00:26:38.032 EOF 00:26:38.032 )") 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:38.032 { 00:26:38.032 "params": { 00:26:38.032 "name": "Nvme$subsystem", 00:26:38.032 "trtype": "$TEST_TRANSPORT", 00:26:38.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.032 "adrfam": "ipv4", 00:26:38.032 "trsvcid": "$NVMF_PORT", 00:26:38.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.032 "hdgst": ${hdgst:-false}, 00:26:38.032 "ddgst": ${ddgst:-false} 00:26:38.032 }, 00:26:38.032 "method": "bdev_nvme_attach_controller" 00:26:38.032 } 00:26:38.032 EOF 00:26:38.032 )") 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:38.032 { 00:26:38.032 "params": { 00:26:38.032 "name": "Nvme$subsystem", 00:26:38.032 
"trtype": "$TEST_TRANSPORT", 00:26:38.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.032 "adrfam": "ipv4", 00:26:38.032 "trsvcid": "$NVMF_PORT", 00:26:38.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.032 "hdgst": ${hdgst:-false}, 00:26:38.032 "ddgst": ${ddgst:-false} 00:26:38.032 }, 00:26:38.032 "method": "bdev_nvme_attach_controller" 00:26:38.032 } 00:26:38.032 EOF 00:26:38.032 )") 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:38.032 { 00:26:38.032 "params": { 00:26:38.032 "name": "Nvme$subsystem", 00:26:38.032 "trtype": "$TEST_TRANSPORT", 00:26:38.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.032 "adrfam": "ipv4", 00:26:38.032 "trsvcid": "$NVMF_PORT", 00:26:38.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.032 "hdgst": ${hdgst:-false}, 00:26:38.032 "ddgst": ${ddgst:-false} 00:26:38.032 }, 00:26:38.032 "method": "bdev_nvme_attach_controller" 00:26:38.032 } 00:26:38.032 EOF 00:26:38.032 )") 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:38.032 { 00:26:38.032 "params": { 00:26:38.032 "name": "Nvme$subsystem", 00:26:38.032 "trtype": "$TEST_TRANSPORT", 00:26:38.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.032 "adrfam": "ipv4", 00:26:38.032 "trsvcid": "$NVMF_PORT", 00:26:38.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.032 "hdgst": ${hdgst:-false}, 00:26:38.032 "ddgst": ${ddgst:-false} 00:26:38.032 }, 00:26:38.032 "method": "bdev_nvme_attach_controller" 00:26:38.032 } 00:26:38.032 EOF 00:26:38.032 )") 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:38.032 { 00:26:38.032 "params": { 00:26:38.032 "name": "Nvme$subsystem", 00:26:38.032 "trtype": "$TEST_TRANSPORT", 00:26:38.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.032 "adrfam": "ipv4", 00:26:38.032 "trsvcid": "$NVMF_PORT", 00:26:38.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.032 "hdgst": ${hdgst:-false}, 00:26:38.032 "ddgst": ${ddgst:-false} 00:26:38.032 }, 00:26:38.032 "method": "bdev_nvme_attach_controller" 00:26:38.032 } 00:26:38.032 EOF 00:26:38.032 )") 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:38.032 23:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:38.032 { 00:26:38.032 "params": { 00:26:38.032 "name": "Nvme$subsystem", 00:26:38.032 "trtype": "$TEST_TRANSPORT", 00:26:38.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.032 "adrfam": "ipv4", 00:26:38.032 "trsvcid": "$NVMF_PORT", 00:26:38.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.032 "hdgst": ${hdgst:-false}, 00:26:38.032 "ddgst": ${ddgst:-false} 00:26:38.032 }, 00:26:38.032 "method": "bdev_nvme_attach_controller" 00:26:38.032 } 00:26:38.032 EOF 00:26:38.032 )") 00:26:38.032 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:38.033 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:38.033 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:38.033 { 00:26:38.033 "params": { 00:26:38.033 "name": "Nvme$subsystem", 00:26:38.033 "trtype": "$TEST_TRANSPORT", 00:26:38.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.033 "adrfam": "ipv4", 00:26:38.033 "trsvcid": "$NVMF_PORT", 00:26:38.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.033 "hdgst": ${hdgst:-false}, 00:26:38.033 "ddgst": ${ddgst:-false} 00:26:38.033 }, 00:26:38.033 "method": "bdev_nvme_attach_controller" 00:26:38.033 } 00:26:38.033 EOF 00:26:38.033 )") 00:26:38.033 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:38.033 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:38.033 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:38.033 { 00:26:38.033 "params": { 00:26:38.033 "name": "Nvme$subsystem", 00:26:38.033 "trtype": "$TEST_TRANSPORT", 00:26:38.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.033 "adrfam": "ipv4", 00:26:38.033 "trsvcid": "$NVMF_PORT", 00:26:38.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.033 "hdgst": ${hdgst:-false}, 00:26:38.033 "ddgst": ${ddgst:-false} 00:26:38.033 }, 00:26:38.033 "method": "bdev_nvme_attach_controller" 00:26:38.033 } 00:26:38.033 EOF 00:26:38.033 )") 00:26:38.033 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:26:38.033 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:26:38.033 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:26:38.033 23:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:38.033 "params": { 00:26:38.033 "name": "Nvme1", 00:26:38.033 "trtype": "tcp", 00:26:38.033 "traddr": "10.0.0.2", 00:26:38.033 "adrfam": "ipv4", 00:26:38.033 "trsvcid": "4420", 00:26:38.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:38.033 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:38.033 "hdgst": false, 00:26:38.033 "ddgst": false 00:26:38.033 }, 00:26:38.033 "method": "bdev_nvme_attach_controller" 00:26:38.033 },{ 00:26:38.033 "params": { 00:26:38.033 "name": "Nvme2", 00:26:38.033 "trtype": "tcp", 00:26:38.033 "traddr": "10.0.0.2", 00:26:38.033 "adrfam": "ipv4", 00:26:38.033 "trsvcid": "4420", 00:26:38.033 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:38.033 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:38.033 "hdgst": false, 00:26:38.033 "ddgst": false 00:26:38.033 }, 00:26:38.033 "method": "bdev_nvme_attach_controller" 00:26:38.033 },{ 00:26:38.033 "params": { 00:26:38.033 "name": "Nvme3", 00:26:38.033 "trtype": "tcp", 00:26:38.033 "traddr": "10.0.0.2", 00:26:38.033 "adrfam": "ipv4", 00:26:38.033 "trsvcid": "4420", 00:26:38.033 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:38.033 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:38.033 "hdgst": false, 00:26:38.033 "ddgst": false 00:26:38.033 }, 00:26:38.033 "method": "bdev_nvme_attach_controller" 00:26:38.033 },{ 00:26:38.033 "params": { 00:26:38.033 "name": "Nvme4", 00:26:38.033 "trtype": "tcp", 00:26:38.033 "traddr": "10.0.0.2", 00:26:38.033 "adrfam": "ipv4", 00:26:38.033 "trsvcid": "4420", 00:26:38.033 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:38.033 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:38.033 "hdgst": false, 00:26:38.033 "ddgst": false 00:26:38.033 }, 00:26:38.033 "method": "bdev_nvme_attach_controller" 00:26:38.033 },{ 00:26:38.033 "params": { 00:26:38.033 "name": "Nvme5", 00:26:38.033 "trtype": "tcp", 00:26:38.033 "traddr": "10.0.0.2", 00:26:38.033 "adrfam": "ipv4", 00:26:38.033 "trsvcid": "4420", 00:26:38.033 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:38.033 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:38.033 "hdgst": false, 00:26:38.033 "ddgst": false 00:26:38.033 }, 00:26:38.033 "method": "bdev_nvme_attach_controller" 00:26:38.033 },{ 00:26:38.033 "params": { 00:26:38.033 "name": "Nvme6", 00:26:38.033 "trtype": "tcp", 00:26:38.033 "traddr": "10.0.0.2", 00:26:38.033 "adrfam": "ipv4", 00:26:38.033 "trsvcid": "4420", 00:26:38.033 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:38.033 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:38.033 "hdgst": false, 00:26:38.033 "ddgst": false 00:26:38.033 }, 00:26:38.033 "method": "bdev_nvme_attach_controller" 00:26:38.033 },{ 00:26:38.033 "params": { 00:26:38.033 "name": "Nvme7", 00:26:38.033 "trtype": "tcp", 00:26:38.033 "traddr": "10.0.0.2", 00:26:38.033 "adrfam": "ipv4", 00:26:38.033 "trsvcid": "4420", 00:26:38.033 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:38.033 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:38.033 "hdgst": false, 00:26:38.033 "ddgst": false 00:26:38.033 }, 00:26:38.033 "method": "bdev_nvme_attach_controller" 00:26:38.033 },{ 00:26:38.033 "params": { 00:26:38.033 "name": "Nvme8", 00:26:38.033 "trtype": "tcp", 00:26:38.033 "traddr": "10.0.0.2", 00:26:38.033 "adrfam": "ipv4", 00:26:38.033 "trsvcid": "4420", 00:26:38.033 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:38.033 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:26:38.033 "hdgst": false, 00:26:38.033 "ddgst": false 00:26:38.033 }, 00:26:38.033 "method": "bdev_nvme_attach_controller" 00:26:38.033 },{ 00:26:38.033 "params": { 00:26:38.033 "name": "Nvme9", 00:26:38.033 "trtype": "tcp", 00:26:38.033 "traddr": "10.0.0.2", 00:26:38.033 "adrfam": "ipv4", 00:26:38.033 "trsvcid": "4420", 00:26:38.033 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:38.033 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:38.033 "hdgst": false, 00:26:38.033 "ddgst": false 00:26:38.033 }, 00:26:38.033 "method": "bdev_nvme_attach_controller" 00:26:38.033 },{ 00:26:38.033 "params": { 00:26:38.033 "name": "Nvme10", 00:26:38.033 "trtype": "tcp", 00:26:38.033 "traddr": "10.0.0.2", 00:26:38.033 "adrfam": "ipv4", 00:26:38.033 "trsvcid": "4420", 00:26:38.033 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:38.033 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:38.033 "hdgst": false, 00:26:38.033 "ddgst": false 00:26:38.033 }, 00:26:38.033 "method": "bdev_nvme_attach_controller" 00:26:38.033 }' 00:26:38.033 [2024-07-25 23:32:35.583307] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:38.033 [2024-07-25 23:32:35.583402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1464932 ] 00:26:38.033 EAL: No free 2048 kB hugepages reported on node 1 00:26:38.033 [2024-07-25 23:32:35.618967] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:38.033 [2024-07-25 23:32:35.649133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.033 [2024-07-25 23:32:35.737136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.933 Running I/O for 10 seconds... 
00:26:40.192 23:32:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:40.192 23:32:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:26:40.192 23:32:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:40.192 23:32:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.192 23:32:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:40.192 23:32:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.192 23:32:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:40.192 23:32:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:40.192 23:32:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:40.192 23:32:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:26:40.192 23:32:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:26:40.192 23:32:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:26:40.192 23:32:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:26:40.192 23:32:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:40.192 23:32:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:40.192 23:32:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:40.192 23:32:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.192 23:32:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:40.192 23:32:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.192 23:32:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:26:40.192 23:32:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:26:40.192 23:32:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:40.495 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:40.495 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:40.495 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:40.495 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:40.495 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.495 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:40.495 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.495 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:26:40.495 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:26:40.495 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:40.767 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:40.767 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:40.767 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:40.767 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:40.767 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.767 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:40.767 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.767 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:26:40.767 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:26:40.767 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:26:40.768 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:26:40.768 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:26:40.768 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1464867 00:26:40.768 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1464867 ']' 00:26:40.768 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1464867 00:26:40.768 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:26:40.768 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:40.768 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1464867 00:26:40.768 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:40.768 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:40.768 23:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1464867' 00:26:40.768 killing process with pid 1464867 00:26:40.768 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1464867 00:26:40.768 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1464867 00:26:40.768 [2024-07-25 23:32:38.373836] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104eaf0 is same with the state(5) to be set 00:26:40.768 [2024-07-25 23:32:38.373970] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104eaf0 is same with the state(5) to be set 00:26:40.768 [2024-07-25 23:32:38.373987] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104eaf0 is same with the state(5) to be set 00:26:40.768 [2024-07-25 23:32:38.374000] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104eaf0 is same with the state(5) to be set 00:26:40.768 [2024-07-25 23:32:38.374023] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104eaf0 is same with the state(5) to be set 00:26:40.768 [2024-07-25 23:32:38.374035] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104eaf0 is same with the state(5) to be set 00:26:40.768 [2024-07-25 23:32:38.374050] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104eaf0 is same with the state(5) to be set 00:26:40.768 [2024-07-25 23:32:38.374070] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104eaf0 is same with the state(5) to be set 00:26:40.768 [2024-07-25 23:32:38.374085] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104eaf0 is same with the state(5) to be set 00:26:40.768 [2024-07-25 23:32:38.374097] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104eaf0 is same with the state(5) to be set 00:26:40.768 [2024-07-25 23:32:38.374109] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104eaf0 is same with the state(5) to be set 00:26:40.768 [2024-07-25 23:32:38.374122] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104eaf0 is same with the state(5) to be set 00:26:40.768 [2024-07-25 23:32:38.374134] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104eaf0 is same with the state(5) to be set 00:26:40.768 [2024-07-25 23:32:38.374146] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104eaf0 is same with the state(5) to be set 00:26:40.768 [2024-07-25 23:32:38.374159] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104eaf0 is same with the state(5) to be set 00:26:40.768 [2024-07-25 23:32:38.374171] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104eaf0 is same with the state(5) to be set 00:26:40.768 [2024-07-25 23:32:38.374183] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104eaf0 is same with the state(5) to be set 00:26:40.768 [2024-07-25 23:32:38.374195] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104eaf0 is same with the state(5) to be set 00:26:40.768 [2024-07-25 23:32:38.374207] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104eaf0 is same with the state(5) 
00:26:40.768 [2024-07-25 23:32:38.373836] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104eaf0 is same with the state(5) to be set
00:26:40.768 [... same tcp.c:1653 message repeated for tqpair=0x104eaf0 through 23:32:38.374752 ...]
00:26:40.769 [2024-07-25 23:32:38.376075] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051610 is same with the state(5) to be set
00:26:40.769 [... same message repeated for tqpair=0x1051610 through 23:32:38.376886 ...]
00:26:40.769 [2024-07-25 23:32:38.379716] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104f470 is same with the state(5) to be set
00:26:40.770 [... same message repeated for tqpair=0x104f470 through 23:32:38.380555 ...]
00:26:40.770 [2024-07-25 23:32:38.381712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104f950 is same with the state(5) to be set
00:26:40.771 [... same message repeated for tqpair=0x104f950 through 23:32:38.382545 ...]
00:26:40.771 [2024-07-25 23:32:38.383411] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104fe10 is same with the state(5) to be set
00:26:40.772 [... same message repeated for tqpair=0x104fe10 through 23:32:38.384208 ...]
00:26:40.772 [2024-07-25 23:32:38.385279] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10502f0 is same with the state(5) to be set
00:26:40.773 [... same message repeated for tqpair=0x10502f0 through 23:32:38.386095 ...]
00:26:40.773 [2024-07-25 23:32:38.387824] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1050c90 is same with the state(5) to be set
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10502f0 is same with the state(5) to be set 00:26:40.773 [2024-07-25 23:32:38.385926] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10502f0 is same with the state(5) to be set 00:26:40.773
*ERROR*: The recv state of tqpair=0x1050c90 is same with the state(5) to be set 00:26:40.775 [2024-07-25 23:32:38.388681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.775 [2024-07-25 23:32:38.388686] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1050c90 is same with the state(5) to be set 00:26:40.775 [2024-07-25 23:32:38.388694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7e300 is same with the state(5) to be set 00:26:40.775 [2024-07-25 23:32:38.388699] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1050c90 is same with the state(5) to be set 00:26:40.775 [2024-07-25 23:32:38.388712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1050c90 is same with the state(5) to be set 00:26:40.775 [2024-07-25 23:32:38.388724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1050c90 is same with the state(5) to be set 00:26:40.775 [2024-07-25 23:32:38.388739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.775 [2024-07-25 23:32:38.388759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.775 [2024-07-25 23:32:38.388775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.775 [2024-07-25 23:32:38.388789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.775 [2024-07-25 23:32:38.388803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.775 [2024-07-25 23:32:38.388817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.775 [2024-07-25 23:32:38.388831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.775 [2024-07-25 23:32:38.388845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.775 [2024-07-25 23:32:38.388858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa51f10 is same with the state(5) to be set 00:26:40.775 [2024-07-25 23:32:38.388901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.775 [2024-07-25 23:32:38.388921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.775 [2024-07-25 23:32:38.388936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.775 [2024-07-25 23:32:38.388954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.775 [2024-07-25 23:32:38.388968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.775 [2024-07-25 23:32:38.388982] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.775 [2024-07-25 23:32:38.388996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.775 [2024-07-25 23:32:38.389009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.775 [2024-07-25 23:32:38.389022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1dad0 is same with the state(5) to be set 00:26:40.775 [2024-07-25 23:32:38.389081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.775 [2024-07-25 23:32:38.389102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.775 [2024-07-25 23:32:38.389117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.775 [2024-07-25 23:32:38.389130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.775 [2024-07-25 23:32:38.389144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.775 [2024-07-25 23:32:38.389157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.775 [2024-07-25 23:32:38.389171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.775 [2024-07-25 23:32:38.389184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.775 [2024-07-25 23:32:38.389197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x547610 is same with the state(5) to be set 00:26:40.775 [2024-07-25 23:32:38.389243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.775 [2024-07-25 23:32:38.389262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.775 [2024-07-25 23:32:38.389277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.775 [2024-07-25 23:32:38.389291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.775 [2024-07-25 23:32:38.389305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.775 [2024-07-25 23:32:38.389319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.775 [2024-07-25 23:32:38.389333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.775 [2024-07-25 23:32:38.389346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.775 [2024-07-25 23:32:38.389362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbebcc0 is same with the state(5) to be set 00:26:40.775 [2024-07-25 23:32:38.389407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.775 [2024-07-25 23:32:38.389427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.775 [2024-07-25 23:32:38.389446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.775 [2024-07-25 23:32:38.389451] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with [2024-07-25 23:32:38.389460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:26:40.775 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.776 [2024-07-25 23:32:38.389477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 ns[2024-07-25 23:32:38.389477] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with id:0 cdw10:00000000 cdw11:00000000 00:26:40.776 the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-25 23:32:38.389493] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.776 the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389508] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with [2024-07-25 23:32:38.389508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsthe state(5) to be set 00:26:40.776 id:0 cdw10:00000000 cdw11:00000000 00:26:40.776 [2024-07-25 23:32:38.389522] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with [2024-07-25 23:32:38.389523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:26:40.776 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.776 [2024-07-25 23:32:38.389536] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1bab0 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389549] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389562] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389574] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389587] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389599] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) 
to be set 00:26:40.776 [2024-07-25 23:32:38.389611] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389623] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389685] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389709] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389730] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389753] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389779] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389802] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389823] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389846] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389867] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389884] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389898] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389910] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389922] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389934] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:1[2024-07-25 23:32:38.389946] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.776 the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389961] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.776 [2024-07-25 23:32:38.389974] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389987] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.389991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.776 [2024-07-25 23:32:38.390000] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.390007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.776 [2024-07-25 23:32:38.390013] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.390024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.776 [2024-07-25 23:32:38.390027] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.390038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.776 [2024-07-25 23:32:38.390040] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.390055] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.390068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.776 [2024-07-25 23:32:38.390079] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.390085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.776 [2024-07-25 23:32:38.390097] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.390103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.776 [2024-07-25 23:32:38.390111] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.390118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.776 [2024-07-25 23:32:38.390124] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.390134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.776 [2024-07-25 
23:32:38.390137] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.390148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.776 [2024-07-25 23:32:38.390150] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.390164] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.390164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.776 [2024-07-25 23:32:38.390176] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.390180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.776 [2024-07-25 23:32:38.390189] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.390198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.776 [2024-07-25 23:32:38.390202] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.776 [2024-07-25 23:32:38.390212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.777 [2024-07-25 23:32:38.390215] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.777 [2024-07-25 23:32:38.390228] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.777 [2024-07-25 23:32:38.390229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.777 [2024-07-25 23:32:38.390241] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.777 [2024-07-25 23:32:38.390244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.777 [2024-07-25 23:32:38.390254] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.777 [2024-07-25 23:32:38.390260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.777 [2024-07-25 23:32:38.390267] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.777 [2024-07-25 23:32:38.390276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.777 [2024-07-25 23:32:38.390282] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.777 [2024-07-25 
23:32:38.390293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.777 [2024-07-25 23:32:38.390295] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.777 [2024-07-25 23:32:38.390307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 23:32:38.390309] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.777 the state(5) to be set 00:26:40.777 [2024-07-25 23:32:38.390323] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.777 [2024-07-25 23:32:38.390325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.777 [2024-07-25 23:32:38.390335] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.777 [2024-07-25 23:32:38.390339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.777 [2024-07-25 23:32:38.390348] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.777 [2024-07-25 23:32:38.390360] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.777 [2024-07-25 23:32:38.390364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.777 [2024-07-25 23:32:38.390372] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.777 [2024-07-25 23:32:38.390378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.777 [2024-07-25 23:32:38.390385] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.777 [2024-07-25 23:32:38.390395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.777 [2024-07-25 23:32:38.390398] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051150 is same with the state(5) to be set 00:26:40.777 [2024-07-25 23:32:38.390409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.777 [2024-07-25 23:32:38.390425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.777 [2024-07-25 23:32:38.390439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.777 [2024-07-25 23:32:38.390455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.777 [2024-07-25 23:32:38.390468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:40.777 [2024-07-25 23:32:38.390484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.777 [2024-07-25 23:32:38.390498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.777 [2024-07-25 23:32:38.390514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.777 [2024-07-25 23:32:38.390535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.777 [2024-07-25 23:32:38.390552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.777 [2024-07-25 23:32:38.390566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.777 [2024-07-25 23:32:38.390582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.777 [2024-07-25 23:32:38.390595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.777 [2024-07-25 23:32:38.390611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.777 [2024-07-25 23:32:38.390625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.777 [2024-07-25 23:32:38.390651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.777 [2024-07-25 23:32:38.390665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.777 [2024-07-25 23:32:38.390681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.777 [2024-07-25 23:32:38.390694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.777 [2024-07-25 23:32:38.390710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.777 [2024-07-25 23:32:38.390723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.777 [2024-07-25 23:32:38.390739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.777 [2024-07-25 23:32:38.390753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.777 [2024-07-25 23:32:38.390768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.777 [2024-07-25 23:32:38.390782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:40.777 [2024-07-25 23:32:38.390798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.777 [2024-07-25 23:32:38.390811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.777 [2024-07-25 23:32:38.390827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.777 [2024-07-25 23:32:38.390841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.777 [2024-07-25 23:32:38.390856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.777 [2024-07-25 23:32:38.390870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.777 [2024-07-25 23:32:38.390885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.777 [2024-07-25 23:32:38.390898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.777 [2024-07-25 23:32:38.390918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.777 [2024-07-25 23:32:38.390932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.777 [2024-07-25 23:32:38.390947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.777 [2024-07-25 23:32:38.390969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.777 [2024-07-25 23:32:38.390984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.777 [2024-07-25 23:32:38.390998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.777 [2024-07-25 23:32:38.391014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.778 [2024-07-25 23:32:38.391051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.778 [2024-07-25 23:32:38.391089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:40.778 [2024-07-25 23:32:38.391119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.778 [2024-07-25 23:32:38.391149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.778 [2024-07-25 23:32:38.391179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.778 [2024-07-25 23:32:38.391209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.778 [2024-07-25 23:32:38.391238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.778 [2024-07-25 23:32:38.391268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.778 [2024-07-25 23:32:38.391297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.778 [2024-07-25 23:32:38.391332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.778 [2024-07-25 23:32:38.391372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.778 [2024-07-25 23:32:38.391402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.778 
[2024-07-25 23:32:38.391435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.778 [2024-07-25 23:32:38.391464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.778 [2024-07-25 23:32:38.391494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.778 [2024-07-25 23:32:38.391523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.778 [2024-07-25 23:32:38.391553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.778 [2024-07-25 23:32:38.391583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.778 [2024-07-25 23:32:38.391613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.778 [2024-07-25 23:32:38.391642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.778 [2024-07-25 23:32:38.391671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.778 [2024-07-25 23:32:38.391704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.778 [2024-07-25 23:32:38.391734] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.778 [2024-07-25 23:32:38.391764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.778 [2024-07-25 23:32:38.391793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.778 [2024-07-25 23:32:38.391822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.778 [2024-07-25 23:32:38.391835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.778 [2024-07-25 23:32:38.391851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.779 [2024-07-25 23:32:38.391864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.779 [2024-07-25 23:32:38.391880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.779 [2024-07-25 23:32:38.391893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.779 [2024-07-25 23:32:38.391909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.779 [2024-07-25 23:32:38.391922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.779 [2024-07-25 23:32:38.391938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.779 [2024-07-25 23:32:38.391951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.779 [2024-07-25 23:32:38.391993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:40.779 [2024-07-25 23:32:38.392089] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbdf950 was disconnected and freed. reset controller. 
00:26:40.779 [2024-07-25 23:32:38.394464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.394490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.394512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.394528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.394545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.394564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.394581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.394595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.394611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.394624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.394640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.394654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.394670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.394683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.394699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.394713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.394730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.394744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.394760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.394773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.394789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.394803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.394819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.394832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.394848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.394862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.394878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.394892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.394908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.394921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.394941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.394956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.394972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.394986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.395002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.395016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.395032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.395054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.395079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.395094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.395110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.395124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.395140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.395153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.395169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.395183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.395199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.395213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.395229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.395243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.395259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.395272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.395288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.395302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.779 [2024-07-25 23:32:38.395319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.779 [2024-07-25 23:32:38.395336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.395363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.395377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.395393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.395406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.395426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.395440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.395456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.395470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.395486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.395499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.395515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.395529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.395545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.395559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.395575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.395589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.395604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.395618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.395634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.395648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.395663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.395677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.395693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.395707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.395726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.395740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.395756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.395770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.395786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.395799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.395816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.395829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.395845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.395858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.395874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.395888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.395904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.395917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.395933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.395947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.395963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.395977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.395993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.396006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.396022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.396036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.396055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.396076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.396093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.396112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.396129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.396143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.396158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.396172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.396188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.396202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.396217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.396231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.396247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.396262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.396278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.396292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.396307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-07-25 23:32:38.396321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.780 [2024-07-25 23:32:38.396336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.781 [2024-07-25 23:32:38.396350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.781 [2024-07-25 23:32:38.396366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.781 [2024-07-25 23:32:38.396380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.781 [2024-07-25 23:32:38.396396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.781 [2024-07-25 23:32:38.396410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.781 [2024-07-25 23:32:38.396425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.781 [2024-07-25 23:32:38.396439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.781 [2024-07-25 23:32:38.396519] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x137f3d0 was disconnected and freed. reset controller.
00:26:40.781 [2024-07-25 23:32:38.396723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:26:40.781 [2024-07-25 23:32:38.396769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc1dad0 (9): Bad file descriptor
00:26:40.781 [2024-07-25 23:32:38.398368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:26:40.781 [2024-07-25 23:32:38.398402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbebcc0 (9): Bad file descriptor
00:26:40.781 [2024-07-25 23:32:38.398485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:40.781 [2024-07-25 23:32:38.398507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.781 [2024-07-25 23:32:38.398524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:40.781 [2024-07-25 23:32:38.398537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.781 [2024-07-25 23:32:38.398551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:40.781 [2024-07-25 23:32:38.398566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.781 [2024-07-25 23:32:38.398581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:40.781 [2024-07-25 23:32:38.398594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.781 [2024-07-25 23:32:38.398608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbab180 is same with the state(5) to be set
00:26:40.781 [2024-07-25 23:32:38.398638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf6920 (9): Bad file descriptor
00:26:40.781 [2024-07-25 23:32:38.398672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0b070 (9): Bad file descriptor
00:26:40.781 [2024-07-25 23:32:38.398713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa74ce0 (9): Bad file descriptor
00:26:40.781 [2024-07-25 23:32:38.398744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7e300 (9): Bad file descriptor
00:26:40.781 [2024-07-25 23:32:38.398773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa51f10 (9): Bad file descriptor
00:26:40.781 [2024-07-25 23:32:38.398814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x547610 (9): Bad file descriptor
00:26:40.781 [2024-07-25 23:32:38.398848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc1bab0 (9): Bad file descriptor
00:26:40.781 [2024-07-25 23:32:38.399695] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:40.781 [2024-07-25 23:32:38.399878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.781 [2024-07-25 23:32:38.399907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc1dad0 with addr=10.0.0.2, port=4420
00:26:40.781 [2024-07-25 23:32:38.399924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1dad0 is same with the state(5) to be set
00:26:40.781 [2024-07-25 23:32:38.399997] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:40.781 [2024-07-25 23:32:38.400101] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:40.781 [2024-07-25 23:32:38.400169] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:40.781 [2024-07-25 23:32:38.400236] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:40.781 [2024-07-25 23:32:38.400302] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:40.781 [2024-07-25 23:32:38.400631] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:40.781 [2024-07-25 23:32:38.400700] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:40.781 [2024-07-25 23:32:38.400841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.781 [2024-07-25 23:32:38.400867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcc0 with addr=10.0.0.2, port=4420
00:26:40.781 [2024-07-25 23:32:38.400884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbebcc0 is same with the state(5) to be set
00:26:40.781 [2024-07-25 23:32:38.400903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc1dad0 (9): Bad file descriptor
00:26:40.781 [2024-07-25 23:32:38.401040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbebcc0 (9): Bad file descriptor
00:26:40.781 [2024-07-25 23:32:38.401074] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:26:40.781 [2024-07-25 23:32:38.401090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:26:40.781 [2024-07-25 23:32:38.401108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:26:40.781 [2024-07-25 23:32:38.401182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.781 [2024-07-25 23:32:38.401203] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:26:40.781 [2024-07-25 23:32:38.401217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:26:40.781 [2024-07-25 23:32:38.401230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:26:40.781 [2024-07-25 23:32:38.401290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.781 [2024-07-25 23:32:38.408444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbab180 (9): Bad file descriptor
00:26:40.781 [2024-07-25 23:32:38.408732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.781 [2024-07-25 23:32:38.408761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.781 [2024-07-25 23:32:38.408794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.781 [2024-07-25 23:32:38.408810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.781 [2024-07-25 23:32:38.408827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.781 [2024-07-25 23:32:38.408842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.781 [2024-07-25 23:32:38.408859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.781 [2024-07-25 23:32:38.408873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.781 [2024-07-25 23:32:38.408890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.781 [2024-07-25 23:32:38.408904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.781 [2024-07-25 23:32:38.408920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.781 [2024-07-25 23:32:38.408934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.781 [2024-07-25 23:32:38.408951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.781 [2024-07-25 23:32:38.408976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.781 [2024-07-25 23:32:38.408994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.781 [2024-07-25 23:32:38.409008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.781 [2024-07-25 23:32:38.409025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.409972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.409988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.782 [2024-07-25 23:32:38.410002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.782 [2024-07-25 23:32:38.410018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.410032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.410053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.410073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.410090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.410104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.410121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.410135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.410151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.410169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.410186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.410200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.410216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.410230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.410245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.410259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.410275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.410289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.410305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.410318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.410334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.410348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.410365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.410379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.410395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.410410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.410426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.410440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.410456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.410470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.410486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.410500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.410516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.410530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.410549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.410563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.410579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.410593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.410609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.410623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.410638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.410652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.410668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.410682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.410698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.410712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.410728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.410742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.410757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbde6a0 is same with the state(5) to be set
00:26:40.783 [2024-07-25 23:32:38.412070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.412093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.412115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.412131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.412147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.412161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.412176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.412190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.412206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.412220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.412241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.412255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.412271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.412285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.412300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.412314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.783 [2024-07-25 23:32:38.412330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.783 [2024-07-25 23:32:38.412344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.412360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.412374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.412391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.412405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.412422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.412435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.412452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.412465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.412482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.412495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.412511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.412525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.412541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.412554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.412571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.412584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.412601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.412618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.412636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.412649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.412665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.412679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.412695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.412709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.412725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.412739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.412755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.412769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.412786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.412799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.412815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.412829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.412845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.412859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.412875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.412889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.412904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.412918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.412934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.412948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.412964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.412977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.413001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.413017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.413033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.413046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.413068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.413084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.413100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.413114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.413130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.413145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.413160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.413174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.784 [2024-07-25 23:32:38.413190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.784 [2024-07-25 23:32:38.413203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.413219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.413234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.413250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.413263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.413279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.413293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.413309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.413323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.413339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.413352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.413369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.413386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.413403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.413417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.413433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.413446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.413462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.413476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.413493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.413507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.413522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.413536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.413551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.413566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.413582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.413596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.413611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.413625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.413641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.413655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.413670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.413684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.413700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.413714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.413730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.413744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.413764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.413778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.413794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.413808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.413823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.413837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.413853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.413867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.413883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.413897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.413913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.413927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.413943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.413957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.413972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.413986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.414002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.414015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.414030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe0de0 is same with the state(5) to be set
00:26:40.785 [2024-07-25 23:32:38.415278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.415302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.415323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.415338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.415361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.415374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.415395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.785 [2024-07-25 23:32:38.415409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.785 [2024-07-25 23:32:38.415426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.415440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.415456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.415471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.415487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.415502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.415518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.415532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.415548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.415562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.415578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.415591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.415607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.415621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.415636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.415650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.415666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.415680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.415696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.415710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.415725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.415739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.415755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.415772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.415790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.415804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.415820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.415834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.415850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.415864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.415880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.415894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.415910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.415923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.415940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.415953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.415969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.415983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.415999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.416013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.416028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.416043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.416064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.416080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.416096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.416110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.416126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.416140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.416160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.416175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.416191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.416205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.416221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.416235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.416251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.416265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.416281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.416295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.416312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.416326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.416341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.416355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.416371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.416386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.786 [2024-07-25 23:32:38.416403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.786 [2024-07-25 23:32:38.416416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.416433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.416446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.416462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.416476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.416492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.416506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.416523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.416540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.416557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.416571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.416587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.416601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.416617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.416631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.416647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.416661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.416677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.416691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.416707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.416721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.416737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.416751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.416768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.416782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.416798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.416812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.416828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.416841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.416857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.416871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.416887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.416900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.416919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.416933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.416950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.416963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.416979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.416992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.417008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.417022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.417038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.417052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.417074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.417089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.417105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.417119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.417135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.417149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.417164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.417178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.417194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.417207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.417223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.417237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.417252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4b430 is same with the state(5) to be set
00:26:40.787 [2024-07-25 23:32:38.418480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.418504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.418525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.418545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.418562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.418576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.418592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.418605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.787 [2024-07-25 23:32:38.418622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.787 [2024-07-25 23:32:38.418635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.418651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.418665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.418681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.418695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.418711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.418724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.418740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.418754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.418771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.418785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.418801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.418815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.418831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.418844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.418861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.418875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.418890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.418904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.418923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.418938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.418954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.418969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.418985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.418999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.419015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.419029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.419054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.419074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.419091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.419105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.419121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.419135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.419151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.419164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.419180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.419194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.419210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.419224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.419241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.419254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.419271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.419285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.419301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.419318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.419335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.419358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.419374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.419387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.419403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.419417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.419433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.419447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.419464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.419478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.419494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.419507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.419523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.419537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.419552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.419566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.419582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.419596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.419612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.788 [2024-07-25 23:32:38.419626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.788 [2024-07-25 23:32:38.419642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.419655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.419672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.419686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.419705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.419719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.419735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.419749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.419765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.419779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.419795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.419809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.419825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.419838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.419854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.419868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.419884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.419898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.419914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.419928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.419944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.419958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.419974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.419989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.420005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.420019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.420035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.420049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.420073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.420093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.420110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.420124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.420140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.420154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.420170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.420184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.420200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.420214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.420230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.420244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.420259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.420273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.420289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.420303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.420318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.420332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.420348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.420362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.420378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.420392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.420408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.420422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.420438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.789 [2024-07-25 23:32:38.420453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.789 [2024-07-25 23:32:38.420471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4c770 is same with the state(5) to be set
00:26:40.789 [2024-07-25 23:32:38.421702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.421725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.421747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.421762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.421779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.421793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.421809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.421823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.421839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.421853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.421869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.421882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.421898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.421912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.421927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.421941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.421958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.421972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.421988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.422001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.422017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.422031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.422048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.422068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.422090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.422104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.422120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.422134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.422150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.422163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.422179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.422193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.422209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.422223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.422239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.422253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.422270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.422283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.422299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.422313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.422328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.422342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.422358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.422371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.422387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.422401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.422416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.422430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.422446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.422467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.422484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.422498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.422514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.422528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.422544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.422557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.422574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.422588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.790 [2024-07-25 23:32:38.422604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.790 [2024-07-25 23:32:38.422618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.790 [2024-07-25 23:32:38.422634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.790 [2024-07-25 23:32:38.422647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.790 [2024-07-25 23:32:38.422663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.790 [2024-07-25 23:32:38.422677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.790 [2024-07-25 23:32:38.422694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.790 [2024-07-25 23:32:38.422708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.790 [2024-07-25 23:32:38.422724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.422738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.422754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.422768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.422784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.422797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.422814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.422827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.422847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.422861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.422877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.422891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.422907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.422921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.422937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.422950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.422966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.422980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.422996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.423010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.423025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.423039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.423056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.423076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.423093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.423107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.423123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.423137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.423153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.423167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.423183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.423196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.423212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.423230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:40.791 [2024-07-25 23:32:38.423246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.423260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.423275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.423289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.423305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.423319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.423334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.423348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.423366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.423379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.423396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.423409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.423424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.423438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.423454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.423467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.423483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.423496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.423512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.423525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 
23:32:38.423541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.423554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.423570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.423583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.423602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.423617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.423633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.423647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.423661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4dc00 is same with the state(5) to be set 00:26:40.791 [2024-07-25 23:32:38.424907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.791 [2024-07-25 23:32:38.424930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.791 [2024-07-25 23:32:38.424952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.424967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.424983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.424997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425096] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.792 [2024-07-25 23:32:38.425923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.792 [2024-07-25 23:32:38.425937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.425952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.425966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.425982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426000] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426303] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426602] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.426842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.426856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1526e10 is same with the state(5) to be set 00:26:40.793 [2024-07-25 23:32:38.428122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.428146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.428168] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.793 [2024-07-25 23:32:38.428183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.793 [2024-07-25 23:32:38.428199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.428229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.428259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.428289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.428319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.428348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.428378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.428412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.428442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.428471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.428501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.428530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.428560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.428590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.428620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.428650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.428680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.428709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.428738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.428771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.428801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.428831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.428860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.428889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.428919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.428948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.428977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.428991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.429007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.429021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.429037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.429051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.429075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.429090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.429106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.429119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.429135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.794 [2024-07-25 23:32:38.429152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.794 [2024-07-25 23:32:38.429169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.795 [2024-07-25 23:32:38.429183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.795 [2024-07-25 23:32:38.429199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.795 [2024-07-25 23:32:38.429213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.795 [2024-07-25 23:32:38.429229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.795 [2024-07-25 23:32:38.429243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.795 [2024-07-25 23:32:38.429258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.795 [2024-07-25 23:32:38.429272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.795 [2024-07-25 23:32:38.429287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.795 [2024-07-25 23:32:38.429301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.795 [2024-07-25 23:32:38.429317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.795 [2024-07-25 23:32:38.429330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.795 [2024-07-25 23:32:38.429346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.795 [2024-07-25 23:32:38.429360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.795 [2024-07-25 23:32:38.429376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:40.795 [2024-07-25 23:32:38.429390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.795 [2024-07-25 23:32:38.429406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.795 [2024-07-25 23:32:38.429419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.795 [2024-07-25 23:32:38.429435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.795 [2024-07-25 23:32:38.429448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.795 [2024-07-25 23:32:38.429464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.795 [2024-07-25 23:32:38.429478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.795 [2024-07-25 23:32:38.429495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.795 [2024-07-25 23:32:38.429508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.795 [2024-07-25 23:32:38.429528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.795 [2024-07-25 23:32:38.429542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.795 [2024-07-25 23:32:38.429558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.795 [2024-07-25 23:32:38.429571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.795 [2024-07-25 23:32:38.429587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.795 [2024-07-25 23:32:38.429601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.795 [2024-07-25 23:32:38.429617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.795 [2024-07-25 23:32:38.429631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.795 [2024-07-25 23:32:38.429647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.795 [2024-07-25 23:32:38.429660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.795 [2024-07-25 23:32:38.429676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.795 [2024-07-25 23:32:38.429689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.795 [2024-07-25 23:32:38.429705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.795 [2024-07-25 23:32:38.429719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.795 [2024-07-25 23:32:38.429735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.795 [2024-07-25 23:32:38.429748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.795 [2024-07-25 23:32:38.429764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.795 [2024-07-25 23:32:38.429778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.795 [2024-07-25 23:32:38.429794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.795 [2024-07-25 23:32:38.429808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.795 [2024-07-25 23:32:38.429824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.795 [2024-07-25 23:32:38.429837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.795 [2024-07-25 23:32:38.429852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.795 [2024-07-25 23:32:38.429866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.795 [2024-07-25 23:32:38.429882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.795 [2024-07-25 23:32:38.429899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.795 [2024-07-25 23:32:38.429915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.795 [2024-07-25 23:32:38.429929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.795 [2024-07-25 23:32:38.429944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.795 [2024-07-25 23:32:38.429958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.795 [2024-07-25 23:32:38.429974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.795 [2024-07-25 23:32:38.429988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.795 [2024-07-25 23:32:38.430004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.796 [2024-07-25 23:32:38.430017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.796 [2024-07-25 23:32:38.430033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.796 [2024-07-25 23:32:38.430046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.796 [2024-07-25 23:32:38.430065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd7ff0 is same with the state(5) to be set
00:26:40.796 [2024-07-25 23:32:38.432114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.796 [2024-07-25 23:32:38.432148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:26:40.796 [2024-07-25 23:32:38.432167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:26:40.796 [2024-07-25 23:32:38.432271] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:40.796 [2024-07-25 23:32:38.432299] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:40.796 [2024-07-25 23:32:38.432327] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:40.796 [2024-07-25 23:32:38.432346] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:40.796 [2024-07-25 23:32:38.432366] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
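The `(00/08)` pair in the completions above is status code type 0x00 (generic) with status code 0x08, ABORTED - SQ DELETION: every READ still in flight on qid 1 is failed back when the submission queue is torn down by the controller reset that follows. A minimal sketch of decoding that status in an I/O completion callback with SPDK's public API (the constant names come from spdk/nvme_spec.h; the callback and the retry decision are illustrative, not the test's code):

```c
/* Sketch: decoding the "(00/08)" status above. (00/08) = status code
 * type 0x00 (generic) / status code 0x08 (ABORTED - SQ DELETION). */
#include <stdio.h>
#include "spdk/nvme.h"

static void
read_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
		    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
			/* The SQ was deleted under the I/O (controller reset);
			 * the request can be requeued once the qpair is back. */
			printf("READ aborted by SQ deletion; requeue\n");
			return;
		}
		printf("READ failed: sct=0x%x sc=0x%x\n",
		       cpl->status.sct, cpl->status.sc);
	}
}
```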
00:26:40.796 [2024-07-25 23:32:38.432481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:26:40.796 [2024-07-25 23:32:38.432506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:26:40.796 [2024-07-25 23:32:38.432523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:26:40.796 [2024-07-25 23:32:38.432539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:26:40.796 [2024-07-25 23:32:38.432555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:26:40.796 [2024-07-25 23:32:38.432799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.796 [2024-07-25 23:32:38.432828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa51f10 with addr=10.0.0.2, port=4420
00:26:40.796 [2024-07-25 23:32:38.432845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa51f10 is same with the state(5) to be set
00:26:40.796 [2024-07-25 23:32:38.432973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.796 [2024-07-25 23:32:38.432998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa7e300 with addr=10.0.0.2, port=4420
00:26:40.796 [2024-07-25 23:32:38.433014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7e300 is same with the state(5) to be set
00:26:40.796 [2024-07-25 23:32:38.433123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.796 [2024-07-25 23:32:38.433149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa74ce0 with addr=10.0.0.2, port=4420
00:26:40.796 [2024-07-25 23:32:38.433164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74ce0 is same with the state(5) to be set
00:26:40.796 [2024-07-25 23:32:38.435071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.796 [2024-07-25 23:32:38.435100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0b070 with addr=10.0.0.2, port=4420
00:26:40.796 [2024-07-25 23:32:38.435116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0b070 is same with the state(5) to be set
00:26:40.796 [2024-07-25 23:32:38.435219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.796 [2024-07-25 23:32:38.435243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x547610 with addr=10.0.0.2, port=4420
00:26:40.796 [2024-07-25 23:32:38.435259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x547610 is same with the state(5) to be set
00:26:40.796 [2024-07-25 23:32:38.435354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.796 [2024-07-25 23:32:38.435380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc1bab0 with addr=10.0.0.2, port=4420
00:26:40.796 [2024-07-25 23:32:38.435396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1bab0 is same with the state(5) to be set
00:26:40.796 [2024-07-25 23:32:38.435498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.796 [2024-07-25 23:32:38.435523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf6920 with addr=10.0.0.2, port=4420
00:26:40.796 [2024-07-25 23:32:38.435539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf6920 is same with the state(5) to be set
00:26:40.796 [2024-07-25 23:32:38.435629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.796 [2024-07-25 23:32:38.435654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc1dad0 with addr=10.0.0.2, port=4420
00:26:40.796 [2024-07-25 23:32:38.435670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1dad0 is same with the state(5) to be set
00:26:40.796 [2024-07-25 23:32:38.435695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa51f10 (9): Bad file descriptor
00:26:40.796 [2024-07-25 23:32:38.435715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7e300 (9): Bad file descriptor
00:26:40.796 [2024-07-25 23:32:38.435732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa74ce0 (9): Bad file descriptor
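errno = 111 is ECONNREFUSED on Linux: the target has already dropped its listener on 10.0.0.2:4420, so every reconnect attempt is refused outright. A standalone sketch reproducing the same failure with plain POSIX sockets (address and port copied from the log; this is not SPDK's posix.c):

```c
/* Reproduces "connect() failed, errno = 111" against a port with no
 * listener. Illustrative only. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(4420),   /* NVMe/TCP port used by the test */
	};

	inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		/* With no listener this prints errno = 111 (Connection refused). */
		printf("connect() failed, errno = %d (%s)\n",
		       errno, strerror(errno));
	}
	close(fd);
	return 0;
}
```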
00:26:40.796 [2024-07-25 23:32:38.435852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.796 [2024-07-25 23:32:38.435876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.796 [2024-07-25 23:32:38.435903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.796 [2024-07-25 23:32:38.435919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.796 [2024-07-25 23:32:38.435935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.796 [2024-07-25 23:32:38.435955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.796 [2024-07-25 23:32:38.435972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.796 [2024-07-25 23:32:38.435986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.796 [2024-07-25 23:32:38.436002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.796 [2024-07-25 23:32:38.436016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.796 [2024-07-25 23:32:38.436032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.796 [2024-07-25 23:32:38.436045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.796 [2024-07-25 23:32:38.436067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.796 [2024-07-25 23:32:38.436083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.796 [2024-07-25 23:32:38.436099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.796 [2024-07-25 23:32:38.436113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.796 [2024-07-25 23:32:38.436129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.796 [2024-07-25 23:32:38.436143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.796 [2024-07-25 23:32:38.436158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.796 [2024-07-25 23:32:38.436172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.796 [2024-07-25 23:32:38.436188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.796 [2024-07-25 23:32:38.436201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.796 [2024-07-25 23:32:38.436217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.796 [2024-07-25 23:32:38.436230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.796 [2024-07-25 23:32:38.436246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.796 [2024-07-25 23:32:38.436260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.796 [2024-07-25 23:32:38.436276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.436289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.436305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.436319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.436338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.436353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.436369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.436382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.436398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.436411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.436427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.436440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.436456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.436470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.436486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.436499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.436515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.436529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.436545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.436558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.436574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.436587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.436603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.436616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.436632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.436646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.436662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.436676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.436692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.436710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.436726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.436740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.436756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.436769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.436786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.436800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.436815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.436829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.436845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.436859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.436875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.436889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.436907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.436921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.436937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.436951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.436967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.436980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.436996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.437010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.437026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.437040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.437056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.437081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.437102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.437116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.437132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.437146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.437163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.437177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.437193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.437207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.437223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.437237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.797 [2024-07-25 23:32:38.437253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.797 [2024-07-25 23:32:38.437267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.798 [2024-07-25 23:32:38.437283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.798 [2024-07-25 23:32:38.437296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.798 [2024-07-25 23:32:38.437312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.798 [2024-07-25 23:32:38.437326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.798 [2024-07-25 23:32:38.437342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.798 [2024-07-25 23:32:38.437356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.798 [2024-07-25 23:32:38.437371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.798 [2024-07-25 23:32:38.437385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.798 [2024-07-25 23:32:38.437401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.798 [2024-07-25 23:32:38.437416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.798 [2024-07-25 23:32:38.437432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.798 [2024-07-25 23:32:38.437446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.798 [2024-07-25 23:32:38.437462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.798 [2024-07-25 23:32:38.437479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.798 [2024-07-25 23:32:38.437495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.798 [2024-07-25 23:32:38.437509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.798 [2024-07-25 23:32:38.437525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.798 [2024-07-25 23:32:38.437539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.798 [2024-07-25 23:32:38.437555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.798 [2024-07-25 23:32:38.437569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.798 [2024-07-25 23:32:38.437585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.798 [2024-07-25 23:32:38.437599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.798 [2024-07-25 23:32:38.437615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.798 [2024-07-25 23:32:38.437629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.798 [2024-07-25 23:32:38.437645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.798 [2024-07-25 23:32:38.437659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.798 [2024-07-25 23:32:38.437675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.798 [2024-07-25 23:32:38.437689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.798 [2024-07-25 23:32:38.437706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.798 [2024-07-25 23:32:38.437719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.798 [2024-07-25 23:32:38.437734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.798 [2024-07-25 23:32:38.437748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.798 [2024-07-25 23:32:38.437764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.798 [2024-07-25 23:32:38.437777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.798 [2024-07-25 23:32:38.437794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.798 [2024-07-25 23:32:38.437808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:40.798 [2024-07-25 23:32:38.437823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6b10 is same with the state(5) to be set
00:26:40.798 [2024-07-25 23:32:38.439941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:26:40.798 task offset: 26496 on job bdev=Nvme2n1 fails
00:26:40.798
00:26:40.798                                                                                 Latency(us)
00:26:40.798 Device Information          : runtime(s)     IOPS    MiB/s   Fail/s    TO/s     Average        min        max
00:26:40.798 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.798 Job: Nvme1n1 ended in about 0.92 seconds with error
00:26:40.798 Verification LBA range: start 0x0 length 0x400
00:26:40.798 Nvme1n1                     :       0.92   139.87     8.74    69.93     0.00   301758.39   28156.21  250104.79
00:26:40.798 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.798 Job: Nvme2n1 ended in about 0.90 seconds with error
00:26:40.798 Verification LBA range: start 0x0 length 0x400
00:26:40.798 Nvme2n1                     :       0.90   213.93    13.37    71.31     0.00   217217.28    3519.53  246997.90
00:26:40.798 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.798 Job: Nvme3n1 ended in about 0.92 seconds with error
00:26:40.798 Verification LBA range: start 0x0 length 0x400
00:26:40.798 Nvme3n1                     :       0.92   209.06    13.07    69.69     0.00   217885.01   17670.45  246997.90
00:26:40.798 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.798 Job: Nvme4n1 ended in about 0.92 seconds with error
00:26:40.798 Verification LBA range: start 0x0 length 0x400
00:26:40.798 Nvme4n1                     :       0.92   208.33    13.02    69.44     0.00   214088.63   17961.72  251658.24
00:26:40.798 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.798 Job: Nvme5n1 ended in about 0.92 seconds with error
00:26:40.798 Verification LBA range: start 0x0 length 0x400
00:26:40.798 Nvme5n1                     :       0.92   138.41     8.65    69.20     0.00   280422.27   22816.24  260978.92
00:26:40.798 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.798 Job: Nvme6n1 ended in about 0.93 seconds with error
00:26:40.798 Verification LBA range: start 0x0 length 0x400
00:26:40.798 Nvme6n1                     :       0.93   137.93     8.62    68.97     0.00   275363.46   25437.68  248551.35
00:26:40.798 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.798 Job: Nvme7n1 ended in about 0.90 seconds with error
00:26:40.798 Verification LBA range: start 0x0 length 0x400
00:26:40.798 Nvme7n1                     :       0.90   212.99    13.31    71.00     0.00   195337.91    4563.25  254765.13
00:26:40.798 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.798 Job: Nvme8n1 ended in about 0.93 seconds with error
00:26:40.798 Verification LBA range: start 0x0 length 0x400
00:26:40.798 Nvme8n1                     :       0.93   141.75     8.86    68.73     0.00   259044.79   18544.26  250104.79
00:26:40.798 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.798 Job: Nvme9n1 ended in about 0.94 seconds with error
00:26:40.798 Verification LBA range: start 0x0 length 0x400
00:26:40.798 Nvme9n1                     :       0.94   135.86     8.49    67.93     0.00   262305.82   35535.08  292047.83
00:26:40.798 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.798 Job: Nvme10n1 ended in about 0.93 seconds with error
00:26:40.798 Verification LBA range: start 0x0 length 0x400
00:26:40.798 Nvme10n1                    :       0.93   136.99     8.56    68.50     0.00   253898.65   18544.26  254765.13
00:26:40.798 ===================================================================================================================
00:26:40.798 Total                       :            1675.13   104.70   694.70     0.00   243454.99    3519.53  292047.83
00:26:40.798 [2024-07-25 23:32:38.467211] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:40.798 [2024-07-25 23:32:38.467290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:26:40.799 [2024-07-25 23:32:38.467387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0b070 (9): Bad file descriptor
00:26:40.799 [2024-07-25 23:32:38.467418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x547610 (9): Bad file descriptor
00:26:40.799 [2024-07-25 23:32:38.467437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc1bab0 (9): Bad file descriptor
00:26:40.799 [2024-07-25 23:32:38.467456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf6920 (9): Bad file descriptor
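The table's columns relate arithmetically: with the 65536-byte I/O size from the job headers, MiB/s = IOPS x 65536 / 2^20, i.e. IOPS / 16 (139.87 IOPS gives 8.74 MiB/s for Nvme1n1, and 1675.13 gives 104.70 for the Total row). A quick cross-check, assuming only that formula; the figures are copied from the table:

```c
/* Cross-check of the MiB/s column above: MiB/s = IOPS * io_size / 2^20. */
#include <stdio.h>

int main(void)
{
	const double io_size = 65536.0;             /* "IO size: 65536" */
	const double iops[] = { 139.87, 1675.13 };  /* Nvme1n1 row, Total row */

	for (int i = 0; i < 2; i++) {
		/* Prints 8.74 and 104.70, matching the table. */
		printf("%.2f MiB/s\n", iops[i] * io_size / (1024.0 * 1024.0));
	}
	return 0;
}
```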
00:26:40.799 [2024-07-25 23:32:38.467487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc1dad0 (9): Bad file descriptor
00:26:40.799 [2024-07-25 23:32:38.467505] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.799 [2024-07-25 23:32:38.467519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.799 [2024-07-25 23:32:38.467535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:40.799 [2024-07-25 23:32:38.467561] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:26:40.799 [2024-07-25 23:32:38.467575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:26:40.799 [2024-07-25 23:32:38.467588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:26:40.799 [2024-07-25 23:32:38.467605] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:26:40.799 [2024-07-25 23:32:38.467619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:26:40.799 [2024-07-25 23:32:38.467632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:26:40.799 [2024-07-25 23:32:38.467817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.799 [2024-07-25 23:32:38.467840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.799 [2024-07-25 23:32:38.467853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.799 [2024-07-25 23:32:38.468091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-07-25 23:32:38.468127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebcc0 with addr=10.0.0.2, port=4420
00:26:40.799 [2024-07-25 23:32:38.468147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbebcc0 is same with the state(5) to be set
00:26:40.799 [2024-07-25 23:32:38.468276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-07-25 23:32:38.468303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbab180 with addr=10.0.0.2, port=4420
00:26:40.799 [2024-07-25 23:32:38.468318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbab180 is same with the state(5) to be set
00:26:40.799 [2024-07-25 23:32:38.468333] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:26:40.799 [2024-07-25 23:32:38.468345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:26:40.799 [2024-07-25 23:32:38.468358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:26:40.799 [2024-07-25 23:32:38.468377] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:26:40.799 [2024-07-25 23:32:38.468391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:26:40.799 [2024-07-25 23:32:38.468403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:26:40.799 [2024-07-25 23:32:38.468419] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:26:40.799 [2024-07-25 23:32:38.468432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:26:40.799 [2024-07-25 23:32:38.468445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:26:40.799 [2024-07-25 23:32:38.468461] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:26:40.799 [2024-07-25 23:32:38.468474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:26:40.799 [2024-07-25 23:32:38.468492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:26:40.799 [2024-07-25 23:32:38.468510] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:26:40.799 [2024-07-25 23:32:38.468523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:26:40.799 [2024-07-25 23:32:38.468536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:26:40.799 [2024-07-25 23:32:38.468588] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:40.799 [2024-07-25 23:32:38.468612] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:40.799 [2024-07-25 23:32:38.468632] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:40.799 [2024-07-25 23:32:38.468651] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:40.799 [2024-07-25 23:32:38.468670] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:40.799 [2024-07-25 23:32:38.468993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.799 [2024-07-25 23:32:38.469016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.799 [2024-07-25 23:32:38.469029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.799 [2024-07-25 23:32:38.469041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.799 [2024-07-25 23:32:38.469052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
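The sequence above is the reset path failing end to end: disconnect, reconnect poll, reinitialization failure, controller marked failed, reset reported as failed. A generic sketch of that reconnect-poll shape; the types and functions below are hypothetical stand-ins, not SPDK's internal API:

```c
/* Generic shape of the failing reset path logged above. All names here
 * are hypothetical stand-ins; this is not SPDK code. */
#include <stdbool.h>
#include <stdio.h>

struct ctrlr {
	bool failed;
};

/* Stand-in for the transport reconnect; the target is gone, so it fails. */
static bool try_reconnect(struct ctrlr *c)
{
	(void)c;
	return false;
}

static void reconnect_poll(struct ctrlr *c, int max_attempts)
{
	for (int i = 0; i < max_attempts; i++) {
		if (try_reconnect(c)) {
			return;   /* reinitialization succeeded */
		}
	}
	/* "controller reinitialization failed ... in failed state." */
	c->failed = true;
}

int main(void)
{
	struct ctrlr c = { .failed = false };

	reconnect_poll(&c, 3);
	printf("ctrlr failed: %s\n", c.failed ? "yes" : "no");
	return 0;
}
```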
00:26:40.799 [2024-07-25 23:32:38.469100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbebcc0 (9): Bad file descriptor
00:26:40.799 [2024-07-25 23:32:38.469123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbab180 (9): Bad file descriptor
00:26:40.799 [2024-07-25 23:32:38.469183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:26:40.799 [2024-07-25 23:32:38.469207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:26:40.799 [2024-07-25 23:32:38.469238] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:26:40.799 [2024-07-25 23:32:38.469254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:26:40.799 [2024-07-25 23:32:38.469268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:26:40.799 [2024-07-25 23:32:38.469284] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:26:40.799 [2024-07-25 23:32:38.469298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:26:40.799 [2024-07-25 23:32:38.469310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:26:40.799 [2024-07-25 23:32:38.469349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:40.799 [2024-07-25 23:32:38.469380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.799 [2024-07-25 23:32:38.469396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.799 [2024-07-25 23:32:38.469513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-07-25 23:32:38.469539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa74ce0 with addr=10.0.0.2, port=4420
00:26:40.799 [2024-07-25 23:32:38.469556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74ce0 is same with the state(5) to be set
00:26:40.799 [2024-07-25 23:32:38.469663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-07-25 23:32:38.469693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa7e300 with addr=10.0.0.2, port=4420
00:26:40.799 [2024-07-25 23:32:38.469710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7e300 is same with the state(5) to be set
00:26:40.799 [2024-07-25 23:32:38.469838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-07-25 23:32:38.469865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa51f10 with addr=10.0.0.2, port=4420
00:26:40.799 [2024-07-25 23:32:38.469880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa51f10 is same with the state(5) to be set
00:26:40.799 [2024-07-25 23:32:38.469899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa74ce0 (9): Bad file descriptor
00:26:40.799 [2024-07-25 23:32:38.469918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7e300 (9): Bad file descriptor
00:26:40.799 [2024-07-25 23:32:38.469964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa51f10 (9): Bad file descriptor
00:26:40.799 [2024-07-25 23:32:38.469986] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:26:40.799 [2024-07-25 23:32:38.469999] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:26:40.800 [2024-07-25 23:32:38.470012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:26:40.800 [2024-07-25 23:32:38.470029] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:26:40.800 [2024-07-25 23:32:38.470042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:26:40.800 [2024-07-25 23:32:38.470055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:26:40.800 [2024-07-25 23:32:38.470104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.800 [2024-07-25 23:32:38.470122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:40.800 [2024-07-25 23:32:38.470135] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:40.800 [2024-07-25 23:32:38.470148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:40.800 [2024-07-25 23:32:38.470160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
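The `(9): Bad file descriptor` in the flush errors is EBADF: by the time the qpair flush runs, the socket behind the tqpair has already been closed. The same errno can be reproduced with any closed descriptor (sketch only, unrelated to SPDK internals):

```c
/* Reproduces errno = 9 (EBADF, "Bad file descriptor") by writing to a
 * descriptor that was already closed. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	int fd = dup(STDOUT_FILENO);

	close(fd);
	if (write(fd, "x", 1) < 0) {
		/* Prints errno = 9 (Bad file descriptor). */
		printf("write failed, errno = %d (%s)\n",
		       errno, strerror(errno));
	}
	return 0;
}
```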
00:26:40.800 [2024-07-25 23:32:38.470195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:41.365 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:26:41.365 23:32:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:26:42.302 23:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1464932
00:26:42.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1464932) - No such process
00:26:42.302 23:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true
00:26:42.302 23:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:26:42.302 23:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:26:42.302 23:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:26:42.302 23:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:42.302 23:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:26:42.302 23:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:42.302 23:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:26:42.302 23:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:42.302 23:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:26:42.302 23:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:42.302 23:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:42.302 rmmod nvme_tcp
00:26:42.302 rmmod nvme_fabrics
00:26:42.302 rmmod nvme_keyring
00:26:42.302 23:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:42.302 23:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:26:42.302 23:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:26:42.302 23:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:26:42.302 23:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:26:42.302 23:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:26:42.302 23:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:26:42.302 23:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:42.302 23:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:42.302 23:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
23:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
23:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
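The `kill -9` at shutdown.sh line 142 targets a PID that has already exited, so the shell reports "No such process" and the script continues via the `true` on the same line. The C-level analogue of that guard is treating ESRCH from kill(2) as success (the PID below is the one from the log, used only as an example value):

```c
/* C analogue of the script's "kill -9 <pid>; true" guard: a PID that
 * already exited makes kill(2) fail with ESRCH ("No such process"),
 * which is not treated as an error. Sketch only. */
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>

static int kill_if_running(pid_t pid)
{
	if (kill(pid, SIGKILL) < 0 && errno != ESRCH) {
		return -1;   /* a real failure, e.g. EPERM */
	}
	return 0;            /* killed, or already gone (ESRCH) */
}

int main(void)
{
	printf("%d\n", kill_if_running((pid_t)1464932));
	return 0;
}
```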
00:26:44.834 23:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:44.834
00:26:44.834 real 0m7.514s
00:26:44.834 user 0m18.630s
00:26:44.834 sys 0m1.451s
00:26:44.834 23:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:44.834 23:32:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:26:44.834 ************************************
00:26:44.834 END TEST nvmf_shutdown_tc3
00:26:44.834 ************************************
00:26:44.834 23:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:26:44.834
00:26:44.834 real 0m27.220s
00:26:44.834 user 1m15.458s
00:26:44.834 sys 0m6.333s
00:26:44.834 23:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:44.834 23:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:26:44.834 ************************************
00:26:44.834 END TEST nvmf_shutdown
00:26:44.834 ************************************
00:26:44.834 23:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT
00:26:44.834
00:26:44.834 real 16m46.929s
00:26:44.834 user 47m22.367s
00:26:44.834 sys 3m45.449s
00:26:44.834 23:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:44.834 23:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:26:44.834 ************************************
00:26:44.834 END TEST nvmf_target_extra
00:26:44.834 ************************************
00:26:44.834 23:32:42 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:26:44.834 23:32:42 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:26:44.834 23:32:42 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:44.834 23:32:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:26:44.834 ************************************
00:26:44.834 START TEST nvmf_host
00:26:44.834 ************************************
00:26:44.834 23:32:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:26:44.834 * Looking for test storage...
00:26:44.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:26:44.834 23:32:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@")
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]]
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:44.835 ************************************
00:26:44.835 START TEST nvmf_multicontroller
00:26:44.835 ************************************
00:26:44.835 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:26:44.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:44.835 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:44.835 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:26:44.835 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:44.835 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:44.835 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:44.835 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:44.835 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:44.835 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:44.835 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:44.835 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:44.835 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:44.835 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:44.835 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:44.835 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:44.835 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:44.835 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:44.835 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:44.835 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:44.835 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:44.835 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:44.835 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:44.835 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:44.835 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.835 23:32:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:26:44.836 23:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:46.737 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:46.737 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:26:46.737 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:46.737 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:46.737 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:46.738 23:32:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:46.738 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:46.738 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:46.738 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:46.738 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:46.738 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:46.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:46.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:26:46.739 00:26:46.739 --- 10.0.0.2 ping statistics --- 00:26:46.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:46.739 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:26:46.739 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:46.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:46.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:26:46.739 00:26:46.739 --- 10.0.0.1 ping statistics --- 00:26:46.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:46.739 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:26:46.739 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:46.739 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:26:46.739 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:46.739 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:46.739 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:46.739 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:46.739 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:46.739 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:46.739 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:46.739 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:46.739 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:46.739 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:46.739 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:46.739 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1467484 00:26:46.739 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:46.739 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1467484 00:26:46.739 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1467484 ']' 00:26:46.739 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:46.739 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:46.739 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:46.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:46.739 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:46.739 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:46.739 [2024-07-25 23:32:44.360683] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
00:26:46.739 [2024-07-25 23:32:44.360774] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:46.739 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.739 [2024-07-25 23:32:44.401067] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:46.739 [2024-07-25 23:32:44.427249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:46.997 [2024-07-25 23:32:44.517792] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:46.997 [2024-07-25 23:32:44.517853] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:46.997 [2024-07-25 23:32:44.517869] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:46.997 [2024-07-25 23:32:44.517883] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:46.997 [2024-07-25 23:32:44.517895] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:46.997 [2024-07-25 23:32:44.517985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:46.997 [2024-07-25 23:32:44.518102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:46.997 [2024-07-25 23:32:44.518106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:46.997 [2024-07-25 23:32:44.647649] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:46.997 Malloc0 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:46.997 [2024-07-25 23:32:44.708440] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:46.997 [2024-07-25 23:32:44.716313] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.997 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.255 Malloc1 00:26:47.255 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.255 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:47.255 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.255 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.255 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.255 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:47.255 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.255 23:32:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.255 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.255 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:47.255 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.255 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.255 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.255 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:26:47.255 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.255 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.255 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.255 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1467566 00:26:47.255 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:47.255 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:47.255 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1467566 /var/tmp/bdevperf.sock 00:26:47.255 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1467566 ']' 00:26:47.255 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:47.255 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:47.255 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:47.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
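The target-side configuration traced above is driven entirely over JSON-RPC: rpc_cmd is the test suite's wrapper around SPDK's scripts/rpc.py, aimed at the nvmf_tgt started inside the cvl_0_0_ns_spdk namespace. Condensed into a standalone sketch (an illustration, not the test script itself; it assumes a running target on the default /var/tmp/spdk.sock and rpc.py on PATH), the sequence that builds the two-listener subsystem bdevperf is about to attach to is:

    # Minimal sketch of the rpc_cmd sequence traced above.
    rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport, options exactly as in the log
    rpc.py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 as a namespace
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The second subsystem (cnode2/Malloc1) is built the same way; the two listeners on ports 4420 and 4421 are what make the multipath checks below possible.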
00:26:47.255 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:47.255 23:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.513 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:47.513 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:26:47.513 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:47.513 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.513 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.513 NVMe0n1 00:26:47.513 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.513 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:47.513 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:47.513 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.513 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.513 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.513 1 00:26:47.513 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:47.513 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:26:47.513 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:47.513 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:47.513 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:47.513 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:47.513 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:47.513 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:47.513 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.513 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.513 request: 00:26:47.513 { 00:26:47.513 "name": "NVMe0", 00:26:47.513 "trtype": "tcp", 00:26:47.513 "traddr": "10.0.0.2", 00:26:47.513 "adrfam": "ipv4", 00:26:47.513 
"trsvcid": "4420", 00:26:47.513 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:47.514 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:47.514 "hostaddr": "10.0.0.2", 00:26:47.514 "hostsvcid": "60000", 00:26:47.514 "prchk_reftag": false, 00:26:47.514 "prchk_guard": false, 00:26:47.514 "hdgst": false, 00:26:47.514 "ddgst": false, 00:26:47.514 "method": "bdev_nvme_attach_controller", 00:26:47.514 "req_id": 1 00:26:47.514 } 00:26:47.514 Got JSON-RPC error response 00:26:47.514 response: 00:26:47.514 { 00:26:47.514 "code": -114, 00:26:47.514 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:47.514 } 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.514 request: 00:26:47.514 { 00:26:47.514 "name": "NVMe0", 00:26:47.514 "trtype": "tcp", 00:26:47.514 "traddr": "10.0.0.2", 00:26:47.514 "adrfam": "ipv4", 00:26:47.514 "trsvcid": "4420", 00:26:47.514 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:47.514 "hostaddr": "10.0.0.2", 00:26:47.514 "hostsvcid": "60000", 00:26:47.514 "prchk_reftag": false, 00:26:47.514 "prchk_guard": false, 00:26:47.514 "hdgst": false, 00:26:47.514 "ddgst": false, 00:26:47.514 "method": "bdev_nvme_attach_controller", 00:26:47.514 "req_id": 1 00:26:47.514 } 00:26:47.514 Got JSON-RPC error response 00:26:47.514 response: 00:26:47.514 { 00:26:47.514 "code": -114, 00:26:47.514 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:26:47.514 } 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.514 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.772 request: 00:26:47.772 { 00:26:47.772 "name": "NVMe0", 00:26:47.772 "trtype": "tcp", 00:26:47.772 "traddr": "10.0.0.2", 00:26:47.772 "adrfam": "ipv4", 00:26:47.772 "trsvcid": "4420", 00:26:47.772 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:47.772 "hostaddr": "10.0.0.2", 00:26:47.772 "hostsvcid": "60000", 00:26:47.772 "prchk_reftag": false, 00:26:47.772 "prchk_guard": false, 00:26:47.772 "hdgst": false, 00:26:47.772 "ddgst": false, 00:26:47.772 "multipath": "disable", 00:26:47.772 "method": "bdev_nvme_attach_controller", 00:26:47.772 "req_id": 1 00:26:47.772 } 00:26:47.772 Got JSON-RPC error response 00:26:47.772 response: 00:26:47.772 { 00:26:47.772 "code": -114, 00:26:47.772 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:26:47.772 } 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.772 request: 00:26:47.772 { 00:26:47.772 "name": "NVMe0", 00:26:47.772 "trtype": "tcp", 00:26:47.772 "traddr": "10.0.0.2", 00:26:47.772 "adrfam": "ipv4", 00:26:47.772 "trsvcid": "4420", 00:26:47.772 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:47.772 "hostaddr": "10.0.0.2", 00:26:47.772 "hostsvcid": "60000", 00:26:47.772 "prchk_reftag": false, 00:26:47.772 "prchk_guard": false, 00:26:47.772 "hdgst": false, 00:26:47.772 "ddgst": false, 00:26:47.772 "multipath": "failover", 00:26:47.772 "method": "bdev_nvme_attach_controller", 00:26:47.772 "req_id": 1 00:26:47.772 } 00:26:47.772 Got JSON-RPC error response 00:26:47.772 response: 00:26:47.772 { 00:26:47.772 "code": -114, 00:26:47.772 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:47.772 } 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.772 00:26:47.772 23:32:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.772 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:48.031 00:26:48.031 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.031 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:48.031 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:48.031 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.031 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:48.031 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.031 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:48.031 23:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:49.404 0 00:26:49.404 23:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:26:49.404 23:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.404 23:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:49.404 23:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.404 23:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1467566 00:26:49.404 23:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1467566 ']' 00:26:49.404 23:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1467566 00:26:49.404 23:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:26:49.404 23:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:49.404 23:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1467566 00:26:49.404 23:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 
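The heart of this test is the series of bdev_nvme_attach_controller calls traced above (host/multicontroller.sh lines 50-87): reusing the controller name NVMe0 with a different hostnqn, a different subsystem NQN, or multipath "disable" is rejected with JSON-RPC error -114, while attaching the same subsystem's second portal on port 4421 succeeds, giving one controller two network paths, which are then cleanly detached again. As a standalone sketch against the bdevperf RPC socket (flags copied from the log; illustrative only):

    # Add a second path to the existing controller NVMe0, then remove it
    # (mirrors host/multicontroller.sh@79 and @83 above).
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller \
        NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

The NVMe1 controller attached at @87 then gives bdevperf a second controller on port 4421, bdev_nvme_get_controllers confirms both, and perform_tests drives the one-second write workload summarized in try.txt below.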
00:26:49.404 23:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:49.404 23:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1467566' 00:26:49.404 killing process with pid 1467566 00:26:49.404 23:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1467566 00:26:49.404 23:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1467566 00:26:49.404 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:49.404 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.404 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:49.404 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.404 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:49.404 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.404 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:49.404 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.404 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:26:49.404 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:49.404 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:26:49.404 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:26:49.404 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:26:49.404 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:26:49.404 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:49.404 [2024-07-25 23:32:44.818638] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:49.404 [2024-07-25 23:32:44.818725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467566 ] 00:26:49.404 EAL: No free 2048 kB hugepages reported on node 1 00:26:49.404 [2024-07-25 23:32:44.854124] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:26:49.405 [2024-07-25 23:32:44.883240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 
00:26:49.405 [2024-07-25 23:32:44.969012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 
00:26:49.405 [2024-07-25 23:32:45.639454] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name b03ffb60-f3da-4a50-8e4f-be5219050534 already exists 
00:26:49.405 [2024-07-25 23:32:45.639494] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:b03ffb60-f3da-4a50-8e4f-be5219050534 alias for bdev NVMe1n1 
00:26:49.405 [2024-07-25 23:32:45.639524] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 
00:26:49.405 Running I/O for 1 seconds... 
00:26:49.405 
00:26:49.405 Latency(us) 
00:26:49.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:26:49.405 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 
00:26:49.405 NVMe0n1 : 1.01 19331.82 75.51 0.00 0.00 6610.89 2135.99 11796.48 
00:26:49.405 =================================================================================================================== 
00:26:49.405 Total : 19331.82 75.51 0.00 0.00 6610.89 2135.99 11796.48 
00:26:49.405 Received shutdown signal, test time was about 1.000000 seconds 
00:26:49.405 
00:26:49.405 Latency(us) 
00:26:49.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:26:49.405 =================================================================================================================== 
00:26:49.405 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:26:49.405 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 
00:26:49.405 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 
00:26:49.405 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 
00:26:49.405 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 
00:26:49.405 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 
00:26:49.405 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 
00:26:49.405 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 
00:26:49.405 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 
00:26:49.405 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 
00:26:49.405 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:26:49.405 rmmod nvme_tcp 
00:26:49.405 rmmod nvme_fabrics 
00:26:49.405 rmmod nvme_keyring 
00:26:49.405 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:26:49.664 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 
00:26:49.664 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 
00:26:49.664 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1467484 ']' 
00:26:49.664 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1467484 
00:26:49.664 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1467484 ']' 
00:26:49.664 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1467484 00:26:49.664 23:32:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:26:49.664 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:49.664 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1467484 00:26:49.664 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:49.664 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:49.664 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1467484' 00:26:49.664 killing process with pid 1467484 00:26:49.664 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1467484 00:26:49.664 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1467484 00:26:49.924 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:49.924 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:49.924 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:49.924 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:49.924 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:49.924 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.925 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:49.925 23:32:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.827 23:32:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:51.827 00:26:51.827 real 0m7.338s 00:26:51.827 user 0m11.569s 00:26:51.827 sys 0m2.258s 00:26:51.827 23:32:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:51.827 23:32:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:51.827 ************************************ 00:26:51.827 END TEST nvmf_multicontroller 00:26:51.827 ************************************ 00:26:51.827 23:32:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:51.827 23:32:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:51.827 23:32:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:51.827 23:32:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.827 ************************************ 00:26:51.827 START TEST nvmf_aer 00:26:51.827 ************************************ 00:26:51.827 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:52.085 * Looking for test storage... 
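Each test script in this run is driven by the same run_test wrapper, which prints the asterisk banners, times the script, and emits the real/user/sys summary seen at the end of nvmf_multicontroller above. A simplified sketch of that pattern (the actual wrapper in common/autotest_common.sh also validates the argument count, the '[' 3 -le 1 ']' trace, and toggles xtrace):

  run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
  }
  # invocation as traced above for the next test
  run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp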
00:26:52.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:26:52.085 23:32:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:53.985 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:53.985 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.985 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:53.985 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.986 23:32:51 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 
00:26:53.986 Found net devices under 0000:0a:00.1: cvl_0_1 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 
00:26:53.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:53.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 
00:26:53.986 
00:26:53.986 --- 10.0.0.2 ping statistics --- 
00:26:53.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 
00:26:53.986 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 
00:26:53.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:53.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 
00:26:53.986 
00:26:53.986 --- 10.0.0.1 ping statistics --- 
00:26:53.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 
00:26:53.986 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1469763 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1469763 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1469763 ']' 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:53.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 
00:26:53.986 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 
00:26:53.986 [2024-07-25 23:32:51.699648] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
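nvmf_tcp_init above builds the two-port test bed for the aer case: the target port cvl_0_0 is moved into a private network namespace with 10.0.0.2/24, the initiator port cvl_0_1 keeps 10.0.0.1/24 in the root namespace, TCP/4420 is opened, and one ping in each direction proves the path before nvmf_tgt is launched inside the namespace. Condensed into plain commands (interface names, addresses and the namespace name are taken from the trace; run as root):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                         # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator
  # every target-side process, nvmf_tgt included, then runs under:
  #   ip netns exec cvl_0_0_ns_spdk ...

RPCs still reach the target through the UNIX socket /var/tmp/spdk.sock, which is why the rpc_cmd calls later in this test carry no namespace prefix.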
00:26:53.986 [2024-07-25 23:32:51.699733] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.244 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.244 [2024-07-25 23:32:51.738086] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:54.244 [2024-07-25 23:32:51.770151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:54.244 [2024-07-25 23:32:51.861206] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:54.244 [2024-07-25 23:32:51.861268] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:54.244 [2024-07-25 23:32:51.861284] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:54.244 [2024-07-25 23:32:51.861298] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:54.244 [2024-07-25 23:32:51.861309] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:54.244 [2024-07-25 23:32:51.861391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.244 [2024-07-25 23:32:51.861462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:54.244 [2024-07-25 23:32:51.861561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:54.244 [2024-07-25 23:32:51.861564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.502 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:54.502 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:26:54.502 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:54.502 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:54.502 23:32:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:54.502 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:54.502 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:54.502 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.502 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:54.502 [2024-07-25 23:32:52.009217] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:54.502 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.502 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:54.502 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.502 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:54.502 Malloc0 00:26:54.502 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.502 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:54.502 23:32:52 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.502 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:54.502 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.502 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:54.502 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.502 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:54.502 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.502 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:54.503 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.503 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:54.503 [2024-07-25 23:32:52.060332] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:54.503 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.503 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:54.503 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.503 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:54.503 [ 00:26:54.503 { 00:26:54.503 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:54.503 "subtype": "Discovery", 00:26:54.503 "listen_addresses": [], 00:26:54.503 "allow_any_host": true, 00:26:54.503 "hosts": [] 00:26:54.503 }, 00:26:54.503 { 00:26:54.503 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:54.503 "subtype": "NVMe", 00:26:54.503 "listen_addresses": [ 00:26:54.503 { 00:26:54.503 "trtype": "TCP", 00:26:54.503 "adrfam": "IPv4", 00:26:54.503 "traddr": "10.0.0.2", 00:26:54.503 "trsvcid": "4420" 00:26:54.503 } 00:26:54.503 ], 00:26:54.503 "allow_any_host": true, 00:26:54.503 "hosts": [], 00:26:54.503 "serial_number": "SPDK00000000000001", 00:26:54.503 "model_number": "SPDK bdev Controller", 00:26:54.503 "max_namespaces": 2, 00:26:54.503 "min_cntlid": 1, 00:26:54.503 "max_cntlid": 65519, 00:26:54.503 "namespaces": [ 00:26:54.503 { 00:26:54.503 "nsid": 1, 00:26:54.503 "bdev_name": "Malloc0", 00:26:54.503 "name": "Malloc0", 00:26:54.503 "nguid": "8E261234A3824A2CAB852F732146ED2E", 00:26:54.503 "uuid": "8e261234-a382-4a2c-ab85-2f732146ed2e" 00:26:54.503 } 00:26:54.503 ] 00:26:54.503 } 00:26:54.503 ] 00:26:54.503 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.503 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:54.503 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:54.503 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1469856 00:26:54.503 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:54.503 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:54.503 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@1265 -- # local i=0 00:26:54.503 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:54.503 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:26:54.503 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:26:54.503 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:26:54.503 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.503 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:54.503 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:26:54.503 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:26:54.503 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:26:54.761 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:54.761 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:26:54.761 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:26:54.761 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:26:54.761 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:54.761 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:54.761 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:26:54.761 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:54.761 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.761 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:54.761 Malloc1 00:26:54.761 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.761 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:54.761 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.761 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:54.761 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.761 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:54.761 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.761 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:54.761 Asynchronous Event Request test 00:26:54.761 Attaching to 10.0.0.2 00:26:54.761 Attached to 10.0.0.2 00:26:54.761 Registering asynchronous event callbacks... 00:26:54.761 Starting namespace attribute notice tests for all controllers... 00:26:54.761 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:54.761 aer_cb - Changed Namespace 00:26:54.761 Cleaning up... 
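The aer_cb lines above are the event under test: while the aer tool from test/nvme/aer sits on the controller (its -r, -n 2 and -t /tmp/aer_touch_file arguments appear in the invocation earlier), the target hot-adds a second namespace, which raises the Namespace Attribute Changed notice, the log page 4 callback shown above. On the target side the trigger reduces to two RPCs (names and sizes per the trace; rpc.py talks to the default /var/tmp/spdk.sock):

  ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  # the host-side callback then touches /tmp/aer_touch_file, which the
  # waitforfile polling loop above is spinning on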
00:26:54.761 [ 00:26:54.761 { 00:26:54.761 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:54.761 "subtype": "Discovery", 00:26:54.761 "listen_addresses": [], 00:26:54.761 "allow_any_host": true, 00:26:54.761 "hosts": [] 00:26:54.761 }, 00:26:54.761 { 00:26:54.761 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:54.761 "subtype": "NVMe", 00:26:54.761 "listen_addresses": [ 00:26:54.761 { 00:26:54.761 "trtype": "TCP", 00:26:54.761 "adrfam": "IPv4", 00:26:54.761 "traddr": "10.0.0.2", 00:26:54.761 "trsvcid": "4420" 00:26:54.761 } 00:26:54.761 ], 00:26:54.761 "allow_any_host": true, 00:26:54.761 "hosts": [], 00:26:54.761 "serial_number": "SPDK00000000000001", 00:26:54.761 "model_number": "SPDK bdev Controller", 00:26:54.761 "max_namespaces": 2, 00:26:54.761 "min_cntlid": 1, 00:26:54.761 "max_cntlid": 65519, 00:26:54.761 "namespaces": [ 00:26:54.761 { 00:26:54.761 "nsid": 1, 00:26:54.761 "bdev_name": "Malloc0", 00:26:54.761 "name": "Malloc0", 00:26:54.761 "nguid": "8E261234A3824A2CAB852F732146ED2E", 00:26:54.761 "uuid": "8e261234-a382-4a2c-ab85-2f732146ed2e" 00:26:54.761 }, 00:26:54.761 { 00:26:54.761 "nsid": 2, 00:26:54.761 "bdev_name": "Malloc1", 00:26:54.761 "name": "Malloc1", 00:26:54.761 "nguid": "068641C2FF744FF6A3FF9A163BB0A2F1", 00:26:54.761 "uuid": "068641c2-ff74-4ff6-a3ff-9a163bb0a2f1" 00:26:54.761 } 00:26:54.761 ] 00:26:54.761 } 00:26:54.761 ] 00:26:54.761 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.761 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1469856 00:26:54.761 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:54.761 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.761 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:55.017 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.017 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:55.017 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.017 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:55.017 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:55.018 rmmod 
nvme_tcp 00:26:55.018 rmmod nvme_fabrics 00:26:55.018 rmmod nvme_keyring 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1469763 ']' 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1469763 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1469763 ']' 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1469763 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1469763 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1469763' 00:26:55.018 killing process with pid 1469763 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1469763 00:26:55.018 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1469763 00:26:55.276 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:55.276 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:55.276 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:55.276 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:55.276 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:55.276 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.276 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.276 23:32:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.173 23:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:57.173 00:26:57.173 real 0m5.314s 00:26:57.173 user 0m4.308s 00:26:57.173 sys 0m1.886s 00:26:57.173 23:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:57.173 23:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:57.173 ************************************ 00:26:57.173 END TEST nvmf_aer 00:26:57.173 ************************************ 00:26:57.173 23:32:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:57.173 23:32:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:57.173 23:32:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:57.173 23:32:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.430 
************************************ 00:26:57.431 START TEST nvmf_async_init 00:26:57.431 ************************************ 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:57.431 * Looking for test storage... 00:26:57.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:57.431 
23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=3319ebad421b45268b344ea411b9c001 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:26:57.431 23:32:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 
00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:59.367 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:59.367 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:59.367 
23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.367 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:59.367 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:59.368 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:59.368 23:32:56 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:59.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:59.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:26:59.368 00:26:59.368 --- 10.0.0.2 ping statistics --- 00:26:59.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.368 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:59.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:59.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:26:59.368 00:26:59.368 --- 10.0.0.1 ping statistics --- 00:26:59.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.368 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:59.368 23:32:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:59.368 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1471793 00:26:59.368 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:59.368 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1471793 00:26:59.368 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1471793 ']' 00:26:59.368 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.368 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:59.368 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:59.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:59.368 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:59.368 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:59.368 [2024-07-25 23:32:57.045550] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:26:59.368 [2024-07-25 23:32:57.045630] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:59.368 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.368 [2024-07-25 23:32:57.084097] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:26:59.627 [2024-07-25 23:32:57.114571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.627 [2024-07-25 23:32:57.203828] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:59.627 [2024-07-25 23:32:57.203893] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:59.627 [2024-07-25 23:32:57.203910] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:59.627 [2024-07-25 23:32:57.203924] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:59.627 [2024-07-25 23:32:57.203936] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:59.627 [2024-07-25 23:32:57.203965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.627 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:59.627 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:26:59.627 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:59.627 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:59.627 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:59.627 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:59.627 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:59.627 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.627 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:59.627 [2024-07-25 23:32:57.343376] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:59.627 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.627 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:59.627 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.627 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:59.886 null0 00:26:59.886 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.886 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:59.886 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.886 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:59.886 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.886 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:59.886 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.886 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:59.886 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:26:59.886 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3319ebad421b45268b344ea411b9c001 00:26:59.886 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.886 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:59.886 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.886 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:59.886 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.886 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:59.886 [2024-07-25 23:32:57.383656] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:59.886 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.886 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:59.886 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.886 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:00.146 nvme0n1 00:27:00.146 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.146 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:00.146 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.146 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:00.146 [ 00:27:00.146 { 00:27:00.146 "name": "nvme0n1", 00:27:00.146 "aliases": [ 00:27:00.146 "3319ebad-421b-4526-8b34-4ea411b9c001" 00:27:00.146 ], 00:27:00.146 "product_name": "NVMe disk", 00:27:00.146 "block_size": 512, 00:27:00.146 "num_blocks": 2097152, 00:27:00.146 "uuid": "3319ebad-421b-4526-8b34-4ea411b9c001", 00:27:00.146 "assigned_rate_limits": { 00:27:00.146 "rw_ios_per_sec": 0, 00:27:00.146 "rw_mbytes_per_sec": 0, 00:27:00.146 "r_mbytes_per_sec": 0, 00:27:00.146 "w_mbytes_per_sec": 0 00:27:00.146 }, 00:27:00.146 "claimed": false, 00:27:00.146 "zoned": false, 00:27:00.146 "supported_io_types": { 00:27:00.146 "read": true, 00:27:00.146 "write": true, 00:27:00.146 "unmap": false, 00:27:00.146 "flush": true, 00:27:00.146 "reset": true, 00:27:00.146 "nvme_admin": true, 00:27:00.146 "nvme_io": true, 00:27:00.146 "nvme_io_md": false, 00:27:00.146 "write_zeroes": true, 00:27:00.146 "zcopy": false, 00:27:00.146 "get_zone_info": false, 00:27:00.146 "zone_management": false, 00:27:00.146 "zone_append": false, 00:27:00.146 "compare": true, 00:27:00.146 "compare_and_write": true, 00:27:00.146 "abort": true, 00:27:00.146 "seek_hole": false, 00:27:00.146 "seek_data": false, 00:27:00.146 "copy": true, 00:27:00.146 "nvme_iov_md": false 00:27:00.146 }, 00:27:00.146 "memory_domains": [ 00:27:00.146 { 00:27:00.146 "dma_device_id": "system", 00:27:00.146 "dma_device_type": 1 00:27:00.146 } 00:27:00.146 ], 00:27:00.146 "driver_specific": { 00:27:00.146 "nvme": [ 00:27:00.146 { 00:27:00.146 "trid": { 00:27:00.146 
"trtype": "TCP", 00:27:00.146 "adrfam": "IPv4", 00:27:00.146 "traddr": "10.0.0.2", 00:27:00.146 "trsvcid": "4420", 00:27:00.146 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:00.146 }, 00:27:00.146 "ctrlr_data": { 00:27:00.146 "cntlid": 1, 00:27:00.146 "vendor_id": "0x8086", 00:27:00.146 "model_number": "SPDK bdev Controller", 00:27:00.146 "serial_number": "00000000000000000000", 00:27:00.146 "firmware_revision": "24.09", 00:27:00.146 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:00.146 "oacs": { 00:27:00.146 "security": 0, 00:27:00.146 "format": 0, 00:27:00.146 "firmware": 0, 00:27:00.146 "ns_manage": 0 00:27:00.146 }, 00:27:00.146 "multi_ctrlr": true, 00:27:00.146 "ana_reporting": false 00:27:00.146 }, 00:27:00.146 "vs": { 00:27:00.146 "nvme_version": "1.3" 00:27:00.146 }, 00:27:00.146 "ns_data": { 00:27:00.146 "id": 1, 00:27:00.146 "can_share": true 00:27:00.146 } 00:27:00.146 } 00:27:00.146 ], 00:27:00.146 "mp_policy": "active_passive" 00:27:00.146 } 00:27:00.146 } 00:27:00.146 ] 00:27:00.146 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.146 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:00.146 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.146 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:00.146 [2024-07-25 23:32:57.636715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:00.146 [2024-07-25 23:32:57.636789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc6850 (9): Bad file descriptor 00:27:00.146 [2024-07-25 23:32:57.769203] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:00.146 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.146 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:00.146 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.146 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:00.146 [ 00:27:00.146 { 00:27:00.146 "name": "nvme0n1", 00:27:00.146 "aliases": [ 00:27:00.146 "3319ebad-421b-4526-8b34-4ea411b9c001" 00:27:00.146 ], 00:27:00.146 "product_name": "NVMe disk", 00:27:00.146 "block_size": 512, 00:27:00.146 "num_blocks": 2097152, 00:27:00.146 "uuid": "3319ebad-421b-4526-8b34-4ea411b9c001", 00:27:00.146 "assigned_rate_limits": { 00:27:00.146 "rw_ios_per_sec": 0, 00:27:00.146 "rw_mbytes_per_sec": 0, 00:27:00.146 "r_mbytes_per_sec": 0, 00:27:00.146 "w_mbytes_per_sec": 0 00:27:00.146 }, 00:27:00.146 "claimed": false, 00:27:00.146 "zoned": false, 00:27:00.146 "supported_io_types": { 00:27:00.146 "read": true, 00:27:00.146 "write": true, 00:27:00.146 "unmap": false, 00:27:00.147 "flush": true, 00:27:00.147 "reset": true, 00:27:00.147 "nvme_admin": true, 00:27:00.147 "nvme_io": true, 00:27:00.147 "nvme_io_md": false, 00:27:00.147 "write_zeroes": true, 00:27:00.147 "zcopy": false, 00:27:00.147 "get_zone_info": false, 00:27:00.147 "zone_management": false, 00:27:00.147 "zone_append": false, 00:27:00.147 "compare": true, 00:27:00.147 "compare_and_write": true, 00:27:00.147 "abort": true, 00:27:00.147 "seek_hole": false, 00:27:00.147 "seek_data": false, 00:27:00.147 "copy": true, 00:27:00.147 "nvme_iov_md": false 00:27:00.147 }, 00:27:00.147 "memory_domains": [ 00:27:00.147 { 00:27:00.147 "dma_device_id": "system", 00:27:00.147 "dma_device_type": 1 00:27:00.147 } 00:27:00.147 ], 00:27:00.147 "driver_specific": { 00:27:00.147 "nvme": [ 00:27:00.147 { 00:27:00.147 "trid": { 00:27:00.147 "trtype": "TCP", 00:27:00.147 "adrfam": "IPv4", 00:27:00.147 "traddr": "10.0.0.2", 00:27:00.147 "trsvcid": "4420", 00:27:00.147 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:00.147 }, 00:27:00.147 "ctrlr_data": { 00:27:00.147 "cntlid": 2, 00:27:00.147 "vendor_id": "0x8086", 00:27:00.147 "model_number": "SPDK bdev Controller", 00:27:00.147 "serial_number": "00000000000000000000", 00:27:00.147 "firmware_revision": "24.09", 00:27:00.147 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:00.147 "oacs": { 00:27:00.147 "security": 0, 00:27:00.147 "format": 0, 00:27:00.147 "firmware": 0, 00:27:00.147 "ns_manage": 0 00:27:00.147 }, 00:27:00.147 "multi_ctrlr": true, 00:27:00.147 "ana_reporting": false 00:27:00.147 }, 00:27:00.147 "vs": { 00:27:00.147 "nvme_version": "1.3" 00:27:00.147 }, 00:27:00.147 "ns_data": { 00:27:00.147 "id": 1, 00:27:00.147 "can_share": true 00:27:00.147 } 00:27:00.147 } 00:27:00.147 ], 00:27:00.147 "mp_policy": "active_passive" 00:27:00.147 } 00:27:00.147 } 00:27:00.147 ] 00:27:00.147 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.147 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.147 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.147 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:00.147 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.147 23:32:57 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:00.147 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.O8Bo44gDWx 00:27:00.147 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:00.147 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.O8Bo44gDWx 00:27:00.147 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:00.147 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.147 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:00.147 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.147 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:00.147 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.147 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:00.147 [2024-07-25 23:32:57.821407] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:00.147 [2024-07-25 23:32:57.821563] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:00.147 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.147 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O8Bo44gDWx 00:27:00.147 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.147 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:00.147 [2024-07-25 23:32:57.829430] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:00.147 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.147 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O8Bo44gDWx 00:27:00.147 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.147 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:00.147 [2024-07-25 23:32:57.837454] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:00.147 [2024-07-25 23:32:57.837508] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:00.407 nvme0n1 00:27:00.407 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.407 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:00.407 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:00.407 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:00.407 [ 00:27:00.407 { 00:27:00.407 "name": "nvme0n1", 00:27:00.407 "aliases": [ 00:27:00.407 "3319ebad-421b-4526-8b34-4ea411b9c001" 00:27:00.407 ], 00:27:00.407 "product_name": "NVMe disk", 00:27:00.407 "block_size": 512, 00:27:00.407 "num_blocks": 2097152, 00:27:00.407 "uuid": "3319ebad-421b-4526-8b34-4ea411b9c001", 00:27:00.407 "assigned_rate_limits": { 00:27:00.407 "rw_ios_per_sec": 0, 00:27:00.407 "rw_mbytes_per_sec": 0, 00:27:00.407 "r_mbytes_per_sec": 0, 00:27:00.407 "w_mbytes_per_sec": 0 00:27:00.407 }, 00:27:00.407 "claimed": false, 00:27:00.407 "zoned": false, 00:27:00.407 "supported_io_types": { 00:27:00.407 "read": true, 00:27:00.407 "write": true, 00:27:00.407 "unmap": false, 00:27:00.407 "flush": true, 00:27:00.407 "reset": true, 00:27:00.407 "nvme_admin": true, 00:27:00.407 "nvme_io": true, 00:27:00.407 "nvme_io_md": false, 00:27:00.407 "write_zeroes": true, 00:27:00.407 "zcopy": false, 00:27:00.407 "get_zone_info": false, 00:27:00.407 "zone_management": false, 00:27:00.407 "zone_append": false, 00:27:00.407 "compare": true, 00:27:00.407 "compare_and_write": true, 00:27:00.407 "abort": true, 00:27:00.407 "seek_hole": false, 00:27:00.407 "seek_data": false, 00:27:00.407 "copy": true, 00:27:00.407 "nvme_iov_md": false 00:27:00.407 }, 00:27:00.407 "memory_domains": [ 00:27:00.407 { 00:27:00.407 "dma_device_id": "system", 00:27:00.407 "dma_device_type": 1 00:27:00.407 } 00:27:00.407 ], 00:27:00.407 "driver_specific": { 00:27:00.407 "nvme": [ 00:27:00.407 { 00:27:00.407 "trid": { 00:27:00.407 "trtype": "TCP", 00:27:00.407 "adrfam": "IPv4", 00:27:00.407 "traddr": "10.0.0.2", 00:27:00.407 "trsvcid": "4421", 00:27:00.407 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:00.407 }, 00:27:00.407 "ctrlr_data": { 00:27:00.407 "cntlid": 3, 00:27:00.407 "vendor_id": "0x8086", 00:27:00.407 "model_number": "SPDK bdev Controller", 00:27:00.407 "serial_number": "00000000000000000000", 00:27:00.407 "firmware_revision": "24.09", 00:27:00.407 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:00.407 "oacs": { 00:27:00.407 "security": 0, 00:27:00.407 "format": 0, 00:27:00.407 "firmware": 0, 00:27:00.407 "ns_manage": 0 00:27:00.407 }, 00:27:00.407 "multi_ctrlr": true, 00:27:00.407 "ana_reporting": false 00:27:00.407 }, 00:27:00.407 "vs": { 00:27:00.407 "nvme_version": "1.3" 00:27:00.407 }, 00:27:00.407 "ns_data": { 00:27:00.407 "id": 1, 00:27:00.407 "can_share": true 00:27:00.407 } 00:27:00.407 } 00:27:00.407 ], 00:27:00.407 "mp_policy": "active_passive" 00:27:00.407 } 00:27:00.407 } 00:27:00.407 ] 00:27:00.407 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.407 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.407 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.407 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:00.407 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.407 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.O8Bo44gDWx 00:27:00.407 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:00.407 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:27:00.407 23:32:57 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:00.407 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:27:00.407 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:00.407 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:27:00.407 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:00.407 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:00.407 rmmod nvme_tcp 00:27:00.407 rmmod nvme_fabrics 00:27:00.407 rmmod nvme_keyring 00:27:00.407 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:00.407 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:27:00.407 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:27:00.407 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1471793 ']' 00:27:00.407 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1471793 00:27:00.407 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1471793 ']' 00:27:00.407 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1471793 00:27:00.407 23:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:27:00.407 23:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:00.407 23:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1471793 00:27:00.407 23:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:00.407 23:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:00.407 23:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1471793' 00:27:00.407 killing process with pid 1471793 00:27:00.407 23:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1471793 00:27:00.407 [2024-07-25 23:32:58.029246] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:00.407 [2024-07-25 23:32:58.029280] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:00.408 23:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1471793 00:27:00.666 23:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:00.666 23:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:00.666 23:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:00.667 23:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:00.667 23:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:00.667 23:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.667 23:32:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:00.667 23:32:58 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.570 23:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:02.570 00:27:02.570 real 0m5.369s 00:27:02.570 user 0m2.039s 00:27:02.570 sys 0m1.750s 00:27:02.570 23:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:02.570 23:33:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:02.570 ************************************ 00:27:02.570 END TEST nvmf_async_init 00:27:02.570 ************************************ 00:27:02.570 23:33:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:02.570 23:33:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:02.570 23:33:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:02.570 23:33:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.828 ************************************ 00:27:02.828 START TEST dma 00:27:02.828 ************************************ 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:02.828 * Looking for test storage... 00:27:02.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:02.828 
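The dma suite that starts above is effectively a no-op on TCP: after sourcing nvmf/common.sh (the long paths/export.sh PATH dumps below are that sourcing being traced), host/dma.sh@12-13 compare the transport against rdma and bail out with exit 0, which is why the suite finishes in a fraction of a second. The guard amounts to the following; the variable name is an assumption, since the trace only shows the already-expanded comparison:

    # Paraphrase of the guard traced at host/dma.sh@12-13. $TEST_TRANSPORT is
    # an assumed name; the log only shows the literal '[' tcp '!=' rdma ']'.
    if [ "$TEST_TRANSPORT" != "rdma" ]; then
        exit 0   # the DMA offload path is only exercised over RDMA
    fi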
23:33:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:02.828 23:33:00 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:02.828 23:33:00 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:27:02.828 00:27:02.828 real 0m0.063s 00:27:02.829 user 0m0.032s 00:27:02.829 sys 0m0.036s 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:02.829 ************************************ 00:27:02.829 END TEST dma 00:27:02.829 ************************************ 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.829 ************************************ 00:27:02.829 START TEST nvmf_identify 00:27:02.829 ************************************ 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:02.829 * Looking for test storage... 00:27:02.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:27:02.829 23:33:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:05.361 23:33:02 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:05.361 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:05.361 23:33:02 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:05.361 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:05.361 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:05.361 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:27:05.361 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:05.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:05.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:27:05.362 00:27:05.362 --- 10.0.0.2 ping statistics --- 00:27:05.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.362 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:05.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:05.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:27:05.362 00:27:05.362 --- 10.0.0.1 ping statistics --- 00:27:05.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.362 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1474026 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1474026 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1474026 ']' 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:05.362 [2024-07-25 23:33:02.684144] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:27:05.362 [2024-07-25 23:33:02.684223] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.362 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.362 [2024-07-25 23:33:02.723767] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:27:05.362 [2024-07-25 23:33:02.754529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:05.362 [2024-07-25 23:33:02.847086] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.362 [2024-07-25 23:33:02.847149] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:05.362 [2024-07-25 23:33:02.847176] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.362 [2024-07-25 23:33:02.847191] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.362 [2024-07-25 23:33:02.847203] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:05.362 [2024-07-25 23:33:02.847290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.362 [2024-07-25 23:33:02.847372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:05.362 [2024-07-25 23:33:02.847470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:05.362 [2024-07-25 23:33:02.847473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:05.362 [2024-07-25 23:33:02.976525] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:05.362 23:33:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:05.362 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:05.362 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.362 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:05.362 Malloc0 00:27:05.362 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.362 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:05.362 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.362 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:05.362 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.362 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:05.362 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.362 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:05.362 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.362 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:05.363 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.363 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:05.363 [2024-07-25 23:33:03.053685] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:05.363 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.363 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:05.363 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.363 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:05.363 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.363 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:05.363 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.363 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:05.363 [ 00:27:05.363 { 00:27:05.363 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:05.363 "subtype": "Discovery", 00:27:05.363 "listen_addresses": [ 00:27:05.363 { 00:27:05.363 "trtype": "TCP", 00:27:05.363 "adrfam": "IPv4", 00:27:05.363 "traddr": "10.0.0.2", 00:27:05.363 "trsvcid": "4420" 00:27:05.363 } 00:27:05.363 ], 00:27:05.363 "allow_any_host": true, 00:27:05.363 "hosts": [] 00:27:05.363 }, 00:27:05.363 { 00:27:05.363 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:05.363 "subtype": "NVMe", 00:27:05.363 "listen_addresses": [ 00:27:05.363 { 00:27:05.363 "trtype": "TCP", 00:27:05.363 "adrfam": "IPv4", 00:27:05.363 "traddr": "10.0.0.2", 00:27:05.363 "trsvcid": "4420" 00:27:05.363 } 00:27:05.363 ], 00:27:05.363 "allow_any_host": true, 00:27:05.363 "hosts": [], 00:27:05.363 "serial_number": "SPDK00000000000001", 00:27:05.363 "model_number": "SPDK bdev Controller", 00:27:05.363 "max_namespaces": 32, 00:27:05.363 "min_cntlid": 1, 00:27:05.363 "max_cntlid": 65519, 00:27:05.363 "namespaces": [ 00:27:05.363 { 00:27:05.363 "nsid": 1, 00:27:05.363 "bdev_name": "Malloc0", 00:27:05.363 "name": "Malloc0", 00:27:05.363 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:05.363 "eui64": "ABCDEF0123456789", 00:27:05.363 "uuid": "6b46fc77-5aa5-4a82-b6aa-5e2550dd8119" 00:27:05.363 } 00:27:05.363 ] 00:27:05.363 } 00:27:05.363 ] 00:27:05.363 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.363 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:05.624 [2024-07-25 23:33:03.095403] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 
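For reference, the bring-up traced above condenses to a short scripts/rpc.py sequence; this is a sketch, not the suite's rpc_cmd wrapper. The RPC method names, the malloc bdev geometry (64 MiB of 512-byte blocks), and the listener addresses are taken verbatim from the log; the path variable is illustrative. rpc.py can run from the default namespace because /var/tmp/spdk.sock is a filesystem UNIX socket, which network namespaces do not partition:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# The target runs inside the cvl_0_0_ns_spdk namespace created above.
# (The suite waits for the RPC socket via waitforlisten before issuing RPCs.)
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &

"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
"$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
"$SPDK/scripts/rpc.py" nvmf_get_subsystems   # returns the JSON dump shown above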
00:27:05.624 [2024-07-25 23:33:03.095460] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1474079 ] 00:27:05.624 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.624 [2024-07-25 23:33:03.113835] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:05.624 [2024-07-25 23:33:03.132262] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:05.624 [2024-07-25 23:33:03.132360] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:05.624 [2024-07-25 23:33:03.132370] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:05.624 [2024-07-25 23:33:03.132386] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:05.624 [2024-07-25 23:33:03.132401] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:05.624 [2024-07-25 23:33:03.136134] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:05.624 [2024-07-25 23:33:03.136182] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x156f630 0 00:27:05.624 [2024-07-25 23:33:03.144070] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:05.624 [2024-07-25 23:33:03.144107] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:05.624 [2024-07-25 23:33:03.144118] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:05.624 [2024-07-25 23:33:03.144125] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:05.624 [2024-07-25 23:33:03.144181] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.624 [2024-07-25 23:33:03.144195] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.624 [2024-07-25 23:33:03.144203] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x156f630) 00:27:05.624 [2024-07-25 23:33:03.144224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:05.624 [2024-07-25 23:33:03.144252] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15bdf80, cid 0, qid 0 00:27:05.624 [2024-07-25 23:33:03.151073] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.624 [2024-07-25 23:33:03.151092] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.624 [2024-07-25 23:33:03.151100] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.624 [2024-07-25 23:33:03.151120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15bdf80) on tqpair=0x156f630 00:27:05.624 [2024-07-25 23:33:03.151138] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:05.624 [2024-07-25 23:33:03.151150] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:05.624 [2024-07-25 23:33:03.151160] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs 
(no timeout) 00:27:05.624 [2024-07-25 23:33:03.151184] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.624 [2024-07-25 23:33:03.151193] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.624 [2024-07-25 23:33:03.151200] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x156f630) 00:27:05.624 [2024-07-25 23:33:03.151212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.624 [2024-07-25 23:33:03.151236] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15bdf80, cid 0, qid 0 00:27:05.624 [2024-07-25 23:33:03.151362] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.624 [2024-07-25 23:33:03.151378] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.624 [2024-07-25 23:33:03.151385] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.624 [2024-07-25 23:33:03.151393] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15bdf80) on tqpair=0x156f630 00:27:05.624 [2024-07-25 23:33:03.151406] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:05.624 [2024-07-25 23:33:03.151420] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:05.624 [2024-07-25 23:33:03.151433] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.624 [2024-07-25 23:33:03.151441] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.624 [2024-07-25 23:33:03.151451] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x156f630) 00:27:05.624 [2024-07-25 23:33:03.151463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.624 [2024-07-25 23:33:03.151485] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15bdf80, cid 0, qid 0 00:27:05.624 [2024-07-25 23:33:03.151588] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.624 [2024-07-25 23:33:03.151603] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.624 [2024-07-25 23:33:03.151611] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.624 [2024-07-25 23:33:03.151618] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15bdf80) on tqpair=0x156f630 00:27:05.624 [2024-07-25 23:33:03.151627] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:05.624 [2024-07-25 23:33:03.151642] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:05.624 [2024-07-25 23:33:03.151654] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.624 [2024-07-25 23:33:03.151662] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.624 [2024-07-25 23:33:03.151669] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x156f630) 00:27:05.624 [2024-07-25 23:33:03.151680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.624 [2024-07-25 23:33:03.151701] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15bdf80, cid 0, qid 0 00:27:05.624 [2024-07-25 23:33:03.151803] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.624 [2024-07-25 23:33:03.151816] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.624 [2024-07-25 23:33:03.151823] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.624 [2024-07-25 23:33:03.151830] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15bdf80) on tqpair=0x156f630 00:27:05.624 [2024-07-25 23:33:03.151839] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:05.624 [2024-07-25 23:33:03.151856] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.624 [2024-07-25 23:33:03.151865] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.624 [2024-07-25 23:33:03.151871] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x156f630) 00:27:05.624 [2024-07-25 23:33:03.151882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.624 [2024-07-25 23:33:03.151904] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15bdf80, cid 0, qid 0 00:27:05.624 [2024-07-25 23:33:03.152004] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.624 [2024-07-25 23:33:03.152019] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.624 [2024-07-25 23:33:03.152026] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.624 [2024-07-25 23:33:03.152033] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15bdf80) on tqpair=0x156f630 00:27:05.624 [2024-07-25 23:33:03.152043] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:05.625 [2024-07-25 23:33:03.152052] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:05.625 [2024-07-25 23:33:03.152078] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:05.625 [2024-07-25 23:33:03.152190] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:05.625 [2024-07-25 23:33:03.152199] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:05.625 [2024-07-25 23:33:03.152219] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.625 [2024-07-25 23:33:03.152227] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.625 [2024-07-25 23:33:03.152234] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x156f630) 00:27:05.625 [2024-07-25 23:33:03.152245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.625 [2024-07-25 23:33:03.152267] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15bdf80, cid 0, qid 0 00:27:05.625 [2024-07-25 23:33:03.152411] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
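The *DEBUG* trace through here is the host side of fabrics controller initialization: after the ICReq/ICResp exchange and the FABRIC CONNECT on the admin queue, the initiator walks the register state machine via Property Get/Set — read VS, read CAP, check CC.EN, disable, write CC.EN = 1, then poll for CSTS.RDY = 1. The same properties can be read back from a kernel initiator with nvme-cli's fabrics-only get-property command; a hedged sketch, assuming a reasonably recent nvme-cli and that the controller enumerates as /dev/nvme0:

# Attach the kernel host to the subsystem the target exposes above.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1

# Property offsets per the NVMe spec: CAP=0x0, VS=0x8, CC=0x14, CSTS=0x1c.
nvme get-property /dev/nvme0 --offset=0x0  --human-readable   # CAP
nvme get-property /dev/nvme0 --offset=0x8  --human-readable   # VS
nvme get-property /dev/nvme0 --offset=0x14 --human-readable   # CC   (EN bit)
nvme get-property /dev/nvme0 --offset=0x1c --human-readable   # CSTS (RDY bit)

nvme disconnect -n nqn.2016-06.io.spdk:cnode1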
00:27:05.625 [2024-07-25 23:33:03.152427] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.625 [2024-07-25 23:33:03.152434] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.625 [2024-07-25 23:33:03.152441] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15bdf80) on tqpair=0x156f630 00:27:05.625 [2024-07-25 23:33:03.152450] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:05.625 [2024-07-25 23:33:03.152466] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.625 [2024-07-25 23:33:03.152475] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.625 [2024-07-25 23:33:03.152482] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x156f630) 00:27:05.625 [2024-07-25 23:33:03.152493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.625 [2024-07-25 23:33:03.152514] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15bdf80, cid 0, qid 0 00:27:05.625 [2024-07-25 23:33:03.152618] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.625 [2024-07-25 23:33:03.152634] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.625 [2024-07-25 23:33:03.152641] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.625 [2024-07-25 23:33:03.152648] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15bdf80) on tqpair=0x156f630 00:27:05.625 [2024-07-25 23:33:03.152657] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:05.625 [2024-07-25 23:33:03.152676] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:05.625 [2024-07-25 23:33:03.152690] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:05.625 [2024-07-25 23:33:03.152704] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:05.625 [2024-07-25 23:33:03.152721] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.625 [2024-07-25 23:33:03.152729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x156f630) 00:27:05.625 [2024-07-25 23:33:03.152740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.625 [2024-07-25 23:33:03.152762] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15bdf80, cid 0, qid 0 00:27:05.625 [2024-07-25 23:33:03.152919] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:05.625 [2024-07-25 23:33:03.152935] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:05.625 [2024-07-25 23:33:03.152942] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:05.625 [2024-07-25 23:33:03.152950] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x156f630): datao=0, datal=4096, cccid=0 00:27:05.625 [2024-07-25 23:33:03.152958] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15bdf80) on tqpair(0x156f630): expected_datao=0, payload_size=4096 00:27:05.625 [2024-07-25 23:33:03.152972] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.625 [2024-07-25 23:33:03.152985] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:05.625 [2024-07-25 23:33:03.152994] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:05.625 [2024-07-25 23:33:03.153021] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.625 [2024-07-25 23:33:03.153033] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.625 [2024-07-25 23:33:03.153040] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.625 [2024-07-25 23:33:03.153047] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15bdf80) on tqpair=0x156f630 00:27:05.625 [2024-07-25 23:33:03.153070] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:05.625 [2024-07-25 23:33:03.153088] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:05.625 [2024-07-25 23:33:03.153097] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:05.625 [2024-07-25 23:33:03.153106] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:05.625 [2024-07-25 23:33:03.153115] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:05.625 [2024-07-25 23:33:03.153123] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:05.625 [2024-07-25 23:33:03.153139] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:05.625 [2024-07-25 23:33:03.153157] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.625 [2024-07-25 23:33:03.153166] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.625 [2024-07-25 23:33:03.153173] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x156f630) 00:27:05.625 [2024-07-25 23:33:03.153184] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:05.625 [2024-07-25 23:33:03.153206] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15bdf80, cid 0, qid 0 00:27:05.625 [2024-07-25 23:33:03.153356] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.625 [2024-07-25 23:33:03.153371] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.625 [2024-07-25 23:33:03.153378] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.625 [2024-07-25 23:33:03.153385] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15bdf80) on tqpair=0x156f630 00:27:05.625 [2024-07-25 23:33:03.153400] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.625 [2024-07-25 23:33:03.153408] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.625 [2024-07-25 23:33:03.153414] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x156f630) 00:27:05.625 [2024-07-25 23:33:03.153425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.625 [2024-07-25 23:33:03.153435] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.625 [2024-07-25 23:33:03.153442] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.625 [2024-07-25 23:33:03.153449] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x156f630) 00:27:05.625 [2024-07-25 23:33:03.153458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.625 [2024-07-25 23:33:03.153468] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.625 [2024-07-25 23:33:03.153475] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.625 [2024-07-25 23:33:03.153482] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x156f630) 00:27:05.625 [2024-07-25 23:33:03.153491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.625 [2024-07-25 23:33:03.153505] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.625 [2024-07-25 23:33:03.153513] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.625 [2024-07-25 23:33:03.153535] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x156f630) 00:27:05.625 [2024-07-25 23:33:03.153545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.625 [2024-07-25 23:33:03.153553] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:05.625 [2024-07-25 23:33:03.153572] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:05.625 [2024-07-25 23:33:03.153585] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.625 [2024-07-25 23:33:03.153592] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x156f630) 00:27:05.626 [2024-07-25 23:33:03.153603] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.626 [2024-07-25 23:33:03.153625] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15bdf80, cid 0, qid 0 00:27:05.626 [2024-07-25 23:33:03.153652] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15be100, cid 1, qid 0 00:27:05.626 [2024-07-25 23:33:03.153661] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15be280, cid 2, qid 0 00:27:05.626 [2024-07-25 23:33:03.153669] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15be400, cid 3, qid 0 00:27:05.626 [2024-07-25 23:33:03.153677] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15be580, cid 4, qid 0 00:27:05.626 [2024-07-25 23:33:03.153875] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.626 [2024-07-25 23:33:03.153891] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.626 [2024-07-25 23:33:03.153898] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.153905] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15be580) on tqpair=0x156f630 00:27:05.626 [2024-07-25 23:33:03.153916] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:05.626 [2024-07-25 23:33:03.153925] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:05.626 [2024-07-25 23:33:03.153957] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.153967] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x156f630) 00:27:05.626 [2024-07-25 23:33:03.153978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.626 [2024-07-25 23:33:03.153998] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15be580, cid 4, qid 0 00:27:05.626 [2024-07-25 23:33:03.154160] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:05.626 [2024-07-25 23:33:03.154175] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:05.626 [2024-07-25 23:33:03.154182] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.154189] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x156f630): datao=0, datal=4096, cccid=4 00:27:05.626 [2024-07-25 23:33:03.154197] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15be580) on tqpair(0x156f630): expected_datao=0, payload_size=4096 00:27:05.626 [2024-07-25 23:33:03.154205] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.154228] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.154237] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.154332] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.626 [2024-07-25 23:33:03.154345] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.626 [2024-07-25 23:33:03.154352] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.154359] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15be580) on tqpair=0x156f630 00:27:05.626 [2024-07-25 23:33:03.154380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:05.626 [2024-07-25 23:33:03.154420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.154430] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x156f630) 00:27:05.626 [2024-07-25 23:33:03.154442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.626 [2024-07-25 23:33:03.154453] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.154460] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.154467] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x156f630) 00:27:05.626 [2024-07-25 
23:33:03.154477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.626 [2024-07-25 23:33:03.154519] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15be580, cid 4, qid 0 00:27:05.626 [2024-07-25 23:33:03.154532] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15be700, cid 5, qid 0 00:27:05.626 [2024-07-25 23:33:03.154758] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:05.626 [2024-07-25 23:33:03.154774] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:05.626 [2024-07-25 23:33:03.154781] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.154788] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x156f630): datao=0, datal=1024, cccid=4 00:27:05.626 [2024-07-25 23:33:03.154796] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15be580) on tqpair(0x156f630): expected_datao=0, payload_size=1024 00:27:05.626 [2024-07-25 23:33:03.154804] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.154814] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.154822] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.154831] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.626 [2024-07-25 23:33:03.154840] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.626 [2024-07-25 23:33:03.154847] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.154854] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15be700) on tqpair=0x156f630 00:27:05.626 [2024-07-25 23:33:03.199087] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.626 [2024-07-25 23:33:03.199116] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.626 [2024-07-25 23:33:03.199124] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.199131] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15be580) on tqpair=0x156f630 00:27:05.626 [2024-07-25 23:33:03.199150] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.199160] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x156f630) 00:27:05.626 [2024-07-25 23:33:03.199171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.626 [2024-07-25 23:33:03.199207] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15be580, cid 4, qid 0 00:27:05.626 [2024-07-25 23:33:03.199363] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:05.626 [2024-07-25 23:33:03.199378] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:05.626 [2024-07-25 23:33:03.199386] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.199399] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x156f630): datao=0, datal=3072, cccid=4 00:27:05.626 [2024-07-25 23:33:03.199408] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15be580) on tqpair(0x156f630): expected_datao=0, payload_size=3072 00:27:05.626 
[2024-07-25 23:33:03.199416] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.199427] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.199435] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.199478] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.626 [2024-07-25 23:33:03.199490] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.626 [2024-07-25 23:33:03.199497] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.199505] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15be580) on tqpair=0x156f630 00:27:05.626 [2024-07-25 23:33:03.199520] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.199529] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x156f630) 00:27:05.626 [2024-07-25 23:33:03.199540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.626 [2024-07-25 23:33:03.199568] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15be580, cid 4, qid 0 00:27:05.626 [2024-07-25 23:33:03.199689] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:05.626 [2024-07-25 23:33:03.199704] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:05.626 [2024-07-25 23:33:03.199711] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.199718] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x156f630): datao=0, datal=8, cccid=4 00:27:05.626 [2024-07-25 23:33:03.199726] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15be580) on tqpair(0x156f630): expected_datao=0, payload_size=8 00:27:05.626 [2024-07-25 23:33:03.199734] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.199744] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.199752] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.240231] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.626 [2024-07-25 23:33:03.240250] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.626 [2024-07-25 23:33:03.240258] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.626 [2024-07-25 23:33:03.240265] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15be580) on tqpair=0x156f630 00:27:05.626 ===================================================== 00:27:05.627 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:05.627 ===================================================== 00:27:05.627 Controller Capabilities/Features 00:27:05.627 ================================ 00:27:05.627 Vendor ID: 0000 00:27:05.627 Subsystem Vendor ID: 0000 00:27:05.627 Serial Number: .................... 00:27:05.627 Model Number: ........................................ 
00:27:05.627 Firmware Version: 24.09
00:27:05.627 Recommended Arb Burst: 0
00:27:05.627 IEEE OUI Identifier: 00 00 00
00:27:05.627 Multi-path I/O
00:27:05.627 May have multiple subsystem ports: No
00:27:05.627 May have multiple controllers: No
00:27:05.627 Associated with SR-IOV VF: No
00:27:05.627 Max Data Transfer Size: 131072
00:27:05.627 Max Number of Namespaces: 0
00:27:05.627 Max Number of I/O Queues: 1024
00:27:05.627 NVMe Specification Version (VS): 1.3
00:27:05.627 NVMe Specification Version (Identify): 1.3
00:27:05.627 Maximum Queue Entries: 128
00:27:05.627 Contiguous Queues Required: Yes
00:27:05.627 Arbitration Mechanisms Supported
00:27:05.627 Weighted Round Robin: Not Supported
00:27:05.627 Vendor Specific: Not Supported
00:27:05.627 Reset Timeout: 15000 ms
00:27:05.627 Doorbell Stride: 4 bytes
00:27:05.627 NVM Subsystem Reset: Not Supported
00:27:05.627 Command Sets Supported
00:27:05.627 NVM Command Set: Supported
00:27:05.627 Boot Partition: Not Supported
00:27:05.627 Memory Page Size Minimum: 4096 bytes
00:27:05.627 Memory Page Size Maximum: 4096 bytes
00:27:05.627 Persistent Memory Region: Not Supported
00:27:05.627 Optional Asynchronous Events Supported
00:27:05.627 Namespace Attribute Notices: Not Supported
00:27:05.627 Firmware Activation Notices: Not Supported
00:27:05.627 ANA Change Notices: Not Supported
00:27:05.627 PLE Aggregate Log Change Notices: Not Supported
00:27:05.627 LBA Status Info Alert Notices: Not Supported
00:27:05.627 EGE Aggregate Log Change Notices: Not Supported
00:27:05.627 Normal NVM Subsystem Shutdown event: Not Supported
00:27:05.627 Zone Descriptor Change Notices: Not Supported
00:27:05.627 Discovery Log Change Notices: Supported
00:27:05.627 Controller Attributes
00:27:05.627 128-bit Host Identifier: Not Supported
00:27:05.627 Non-Operational Permissive Mode: Not Supported
00:27:05.627 NVM Sets: Not Supported
00:27:05.627 Read Recovery Levels: Not Supported
00:27:05.627 Endurance Groups: Not Supported
00:27:05.627 Predictable Latency Mode: Not Supported
00:27:05.627 Traffic Based Keep Alive: Not Supported
00:27:05.627 Namespace Granularity: Not Supported
00:27:05.627 SQ Associations: Not Supported
00:27:05.627 UUID List: Not Supported
00:27:05.627 Multi-Domain Subsystem: Not Supported
00:27:05.627 Fixed Capacity Management: Not Supported
00:27:05.627 Variable Capacity Management: Not Supported
00:27:05.627 Delete Endurance Group: Not Supported
00:27:05.627 Delete NVM Set: Not Supported
00:27:05.627 Extended LBA Formats Supported: Not Supported
00:27:05.627 Flexible Data Placement Supported: Not Supported
00:27:05.627
00:27:05.627 Controller Memory Buffer Support
00:27:05.627 ================================
00:27:05.627 Supported: No
00:27:05.627
00:27:05.627 Persistent Memory Region Support
00:27:05.627 ================================
00:27:05.627 Supported: No
00:27:05.627
00:27:05.627 Admin Command Set Attributes
00:27:05.627 ============================
00:27:05.627 Security Send/Receive: Not Supported
00:27:05.627 Format NVM: Not Supported
00:27:05.627 Firmware Activate/Download: Not Supported
00:27:05.627 Namespace Management: Not Supported
00:27:05.627 Device Self-Test: Not Supported
00:27:05.627 Directives: Not Supported
00:27:05.627 NVMe-MI: Not Supported
00:27:05.627 Virtualization Management: Not Supported
00:27:05.627 Doorbell Buffer Config: Not Supported
00:27:05.627 Get LBA Status Capability: Not Supported
00:27:05.627 Command & Feature Lockdown Capability: Not Supported
00:27:05.627 Abort Command Limit: 1
00:27:05.627 Async Event Request Limit: 4
00:27:05.627 Number of Firmware Slots: N/A
00:27:05.627 Firmware Slot 1 Read-Only: N/A
00:27:05.627 Firmware Activation Without Reset: N/A
00:27:05.627 Multiple Update Detection Support: N/A
00:27:05.627 Firmware Update Granularity: No Information Provided
00:27:05.627 Per-Namespace SMART Log: No
00:27:05.627 Asymmetric Namespace Access Log Page: Not Supported
00:27:05.627 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:27:05.627 Command Effects Log Page: Not Supported
00:27:05.627 Get Log Page Extended Data: Supported
00:27:05.627 Telemetry Log Pages: Not Supported
00:27:05.627 Persistent Event Log Pages: Not Supported
00:27:05.627 Supported Log Pages Log Page: May Support
00:27:05.627 Commands Supported & Effects Log Page: Not Supported
00:27:05.627 Feature Identifiers & Effects Log Page: May Support
00:27:05.627 NVMe-MI Commands & Effects Log Page: May Support
00:27:05.627 Data Area 4 for Telemetry Log: Not Supported
00:27:05.627 Error Log Page Entries Supported: 128
00:27:05.627 Keep Alive: Not Supported
00:27:05.627
00:27:05.627 NVM Command Set Attributes
00:27:05.627 ==========================
00:27:05.627 Submission Queue Entry Size
00:27:05.627 Max: 1
00:27:05.627 Min: 1
00:27:05.627 Completion Queue Entry Size
00:27:05.627 Max: 1
00:27:05.627 Min: 1
00:27:05.627 Number of Namespaces: 0
00:27:05.627 Compare Command: Not Supported
00:27:05.627 Write Uncorrectable Command: Not Supported
00:27:05.627 Dataset Management Command: Not Supported
00:27:05.627 Write Zeroes Command: Not Supported
00:27:05.627 Set Features Save Field: Not Supported
00:27:05.627 Reservations: Not Supported
00:27:05.627 Timestamp: Not Supported
00:27:05.627 Copy: Not Supported
00:27:05.627 Volatile Write Cache: Not Present
00:27:05.627 Atomic Write Unit (Normal): 1
00:27:05.627 Atomic Write Unit (PFail): 1
00:27:05.627 Atomic Compare & Write Unit: 1
00:27:05.627 Fused Compare & Write: Supported
00:27:05.627 Scatter-Gather List
00:27:05.627 SGL Command Set: Supported
00:27:05.627 SGL Keyed: Supported
00:27:05.627 SGL Bit Bucket Descriptor: Not Supported
00:27:05.627 SGL Metadata Pointer: Not Supported
00:27:05.627 Oversized SGL: Not Supported
00:27:05.627 SGL Metadata Address: Not Supported
00:27:05.627 SGL Offset: Supported
00:27:05.627 Transport SGL Data Block: Not Supported
00:27:05.627 Replay Protected Memory Block: Not Supported
00:27:05.627
00:27:05.627 Firmware Slot Information
00:27:05.627 =========================
00:27:05.627 Active slot: 0
00:27:05.627
00:27:05.627
00:27:05.627 Error Log
00:27:05.627 =========
00:27:05.627
00:27:05.627 Active Namespaces
00:27:05.627 =================
00:27:05.627 Discovery Log Page
00:27:05.627 ==================
00:27:05.627 Generation Counter: 2
00:27:05.627 Number of Records: 2
00:27:05.627 Record Format: 0
00:27:05.627
00:27:05.627 Discovery Log Entry 0
00:27:05.627 ----------------------
00:27:05.627 Transport Type: 3 (TCP)
00:27:05.627 Address Family: 1 (IPv4)
00:27:05.627 Subsystem Type: 3 (Current Discovery Subsystem)
00:27:05.627 Entry Flags:
00:27:05.627 Duplicate Returned Information: 1
00:27:05.627 Explicit Persistent Connection Support for Discovery: 1
00:27:05.627 Transport Requirements:
00:27:05.627 Secure Channel: Not Required
00:27:05.627 Port ID: 0 (0x0000)
00:27:05.627 Controller ID: 65535 (0xffff)
00:27:05.628 Admin Max SQ Size: 128
00:27:05.628 Transport Service Identifier: 4420
00:27:05.628 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:27:05.628 Transport Address: 10.0.0.2 00:27:05.628
Discovery Log Entry 1 00:27:05.628 ---------------------- 00:27:05.628 Transport Type: 3 (TCP) 00:27:05.628 Address Family: 1 (IPv4) 00:27:05.628 Subsystem Type: 2 (NVM Subsystem) 00:27:05.628 Entry Flags: 00:27:05.628 Duplicate Returned Information: 0 00:27:05.628 Explicit Persistent Connection Support for Discovery: 0 00:27:05.628 Transport Requirements: 00:27:05.628 Secure Channel: Not Required 00:27:05.628 Port ID: 0 (0x0000) 00:27:05.628 Controller ID: 65535 (0xffff) 00:27:05.628 Admin Max SQ Size: 128 00:27:05.628 Transport Service Identifier: 4420 00:27:05.628 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:05.628 Transport Address: 10.0.0.2 [2024-07-25 23:33:03.240375] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:27:05.628 [2024-07-25 23:33:03.240399] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15bdf80) on tqpair=0x156f630 00:27:05.628 [2024-07-25 23:33:03.240413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.628 [2024-07-25 23:33:03.240423] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15be100) on tqpair=0x156f630 00:27:05.628 [2024-07-25 23:33:03.240431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.628 [2024-07-25 23:33:03.240440] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15be280) on tqpair=0x156f630 00:27:05.628 [2024-07-25 23:33:03.240448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.628 [2024-07-25 23:33:03.240456] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15be400) on tqpair=0x156f630 00:27:05.628 [2024-07-25 23:33:03.240464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.628 [2024-07-25 23:33:03.240486] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.628 [2024-07-25 23:33:03.240496] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.628 [2024-07-25 23:33:03.240503] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x156f630) 00:27:05.628 [2024-07-25 23:33:03.240530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.628 [2024-07-25 23:33:03.240556] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15be400, cid 3, qid 0 00:27:05.628 [2024-07-25 23:33:03.240696] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.628 [2024-07-25 23:33:03.240712] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.628 [2024-07-25 23:33:03.240719] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.628 [2024-07-25 23:33:03.240727] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15be400) on tqpair=0x156f630 00:27:05.628 [2024-07-25 23:33:03.240740] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.628 [2024-07-25 23:33:03.240748] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.628 [2024-07-25 23:33:03.240755] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x156f630) 00:27:05.628 [2024-07-25 
23:33:03.240766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.628 [2024-07-25 23:33:03.240793] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15be400, cid 3, qid 0 00:27:05.628 [2024-07-25 23:33:03.240916] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.628 [2024-07-25 23:33:03.240931] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.628 [2024-07-25 23:33:03.240938] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.628 [2024-07-25 23:33:03.240946] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15be400) on tqpair=0x156f630 00:27:05.628 [2024-07-25 23:33:03.240956] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:05.628 [2024-07-25 23:33:03.240966] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:05.628 [2024-07-25 23:33:03.240983] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.628 [2024-07-25 23:33:03.240992] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.628 [2024-07-25 23:33:03.240999] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x156f630) 00:27:05.628 [2024-07-25 23:33:03.241010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.628 [2024-07-25 23:33:03.241031] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15be400, cid 3, qid 0 00:27:05.628 [2024-07-25 23:33:03.241180] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.628 [2024-07-25 23:33:03.241196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.628 [2024-07-25 23:33:03.241203] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.628 [2024-07-25 23:33:03.241210] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15be400) on tqpair=0x156f630 00:27:05.628 [2024-07-25 23:33:03.241228] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.628 [2024-07-25 23:33:03.241238] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.628 [2024-07-25 23:33:03.241245] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x156f630) 00:27:05.628 [2024-07-25 23:33:03.241255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.628 [2024-07-25 23:33:03.241277] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15be400, cid 3, qid 0 00:27:05.628 [2024-07-25 23:33:03.241383] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.628 [2024-07-25 23:33:03.241398] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.628 [2024-07-25 23:33:03.241409] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.628 [2024-07-25 23:33:03.241417] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15be400) on tqpair=0x156f630 00:27:05.628 [2024-07-25 23:33:03.241433] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.628 [2024-07-25 23:33:03.241443] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.628 [2024-07-25 23:33:03.241450] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x156f630) 00:27:05.628 [2024-07-25 23:33:03.241461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.628 [2024-07-25 23:33:03.241482] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15be400, cid 3, qid 0 00:27:05.628 [2024-07-25 23:33:03.241574] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.628 [2024-07-25 23:33:03.241589] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.628 [2024-07-25 23:33:03.241596] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.628 [2024-07-25 23:33:03.241603] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15be400) on tqpair=0x156f630 00:27:05.628 [2024-07-25 23:33:03.241620] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.628 [2024-07-25 23:33:03.241630] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.628 [2024-07-25 23:33:03.241637] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x156f630) 00:27:05.628 [2024-07-25 23:33:03.241648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.628 [2024-07-25 23:33:03.241669] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15be400, cid 3, qid 0 00:27:05.628 [2024-07-25 23:33:03.241784] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.628 [2024-07-25 23:33:03.241799] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.628 [2024-07-25 23:33:03.241806] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.628 [2024-07-25 23:33:03.241813] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15be400) on tqpair=0x156f630 00:27:05.628 [2024-07-25 23:33:03.241830] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.628 [2024-07-25 23:33:03.241839] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.628 [2024-07-25 23:33:03.241846] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x156f630) 00:27:05.628 [2024-07-25 23:33:03.241857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.628 [2024-07-25 23:33:03.241878] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15be400, cid 3, qid 0 00:27:05.628 [2024-07-25 23:33:03.241985] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.628 [2024-07-25 23:33:03.242000] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.628 [2024-07-25 23:33:03.242008] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.628 [2024-07-25 23:33:03.242015] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15be400) on tqpair=0x156f630 00:27:05.629 [2024-07-25 23:33:03.242031] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.629 [2024-07-25 23:33:03.242041] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.629 [2024-07-25 23:33:03.242047] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x156f630) 00:27:05.629 [2024-07-25 23:33:03.243152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.629 [2024-07-25 23:33:03.243184] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15be400, cid 3, qid 0 00:27:05.629 [2024-07-25 23:33:03.243337] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.629 [2024-07-25 23:33:03.243352] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.629 [2024-07-25 23:33:03.243359] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.629 [2024-07-25 23:33:03.243371] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15be400) on tqpair=0x156f630 00:27:05.629 [2024-07-25 23:33:03.243386] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 2 milliseconds 00:27:05.629 00:27:05.629 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:05.629 [2024-07-25 23:33:03.282519] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:27:05.629 [2024-07-25 23:33:03.282566] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1474172 ] 00:27:05.629 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.629 [2024-07-25 23:33:03.299372] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
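[editor's note] The trace that follows comes from spdk_nvme_identify connecting to the target described by the quoted -r transport-ID string. For reference, a minimal host-side sketch of the same flow using the public SPDK API (spdk_nvme_transport_id_parse(), spdk_nvme_connect(), spdk_nvme_ctrlr_get_data()); the program name and error handling are illustrative and not taken from the test suite:

/* identify_sketch.c -- hypothetical example, not the tool's actual source.
 * Performs the same connect + identify that produces the trace below. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";  /* assumed app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string the test passes via -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Drives the connect/enable/identify state machine traced below. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("SN: %.20s  MN: %.40s\n", cdata->sn, cdata->mn);

	spdk_nvme_detach(ctrlr);
	return 0;
}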
00:27:05.629 [2024-07-25 23:33:03.316912] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:05.629 [2024-07-25 23:33:03.316956] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:05.629 [2024-07-25 23:33:03.316965] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:05.629 [2024-07-25 23:33:03.316982] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:05.629 [2024-07-25 23:33:03.316994] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:05.629 [2024-07-25 23:33:03.320098] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:05.629 [2024-07-25 23:33:03.320139] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x139c630 0 00:27:05.629 [2024-07-25 23:33:03.328072] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:05.629 [2024-07-25 23:33:03.328095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:05.629 [2024-07-25 23:33:03.328104] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:05.629 [2024-07-25 23:33:03.328110] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:05.629 [2024-07-25 23:33:03.328148] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.629 [2024-07-25 23:33:03.328159] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.629 [2024-07-25 23:33:03.328166] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139c630) 00:27:05.629 [2024-07-25 23:33:03.328181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:05.629 [2024-07-25 23:33:03.328206] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eaf80, cid 0, qid 0 00:27:05.629 [2024-07-25 23:33:03.336073] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.629 [2024-07-25 23:33:03.336091] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.629 [2024-07-25 23:33:03.336098] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.629 [2024-07-25 23:33:03.336106] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eaf80) on tqpair=0x139c630 00:27:05.629 [2024-07-25 23:33:03.336119] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:05.629 [2024-07-25 23:33:03.336130] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:05.629 [2024-07-25 23:33:03.336139] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:05.629 [2024-07-25 23:33:03.336160] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.629 [2024-07-25 23:33:03.336170] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.629 [2024-07-25 23:33:03.336177] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139c630) 00:27:05.629 [2024-07-25 23:33:03.336188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.629 [2024-07-25 23:33:03.336211] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eaf80, cid 0, qid 0 00:27:05.629 [2024-07-25 23:33:03.336359] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.629 [2024-07-25 23:33:03.336371] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.629 [2024-07-25 23:33:03.336378] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.629 [2024-07-25 23:33:03.336386] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eaf80) on tqpair=0x139c630 00:27:05.629 [2024-07-25 23:33:03.336398] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:05.629 [2024-07-25 23:33:03.336412] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:05.629 [2024-07-25 23:33:03.336425] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.629 [2024-07-25 23:33:03.336433] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.629 [2024-07-25 23:33:03.336439] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139c630) 00:27:05.629 [2024-07-25 23:33:03.336450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.629 [2024-07-25 23:33:03.336472] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eaf80, cid 0, qid 0 00:27:05.629 [2024-07-25 23:33:03.336569] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.629 [2024-07-25 23:33:03.336585] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.629 [2024-07-25 23:33:03.336592] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.629 [2024-07-25 23:33:03.336599] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eaf80) on tqpair=0x139c630 00:27:05.629 [2024-07-25 23:33:03.336608] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:05.629 [2024-07-25 23:33:03.336622] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:05.629 [2024-07-25 23:33:03.336634] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.629 [2024-07-25 23:33:03.336642] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.630 [2024-07-25 23:33:03.336649] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139c630) 00:27:05.630 [2024-07-25 23:33:03.336660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.630 [2024-07-25 23:33:03.336681] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eaf80, cid 0, qid 0 00:27:05.630 [2024-07-25 23:33:03.336780] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.630 [2024-07-25 23:33:03.336792] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.630 [2024-07-25 23:33:03.336799] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.630 [2024-07-25 23:33:03.336806] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eaf80) on tqpair=0x139c630 00:27:05.630 [2024-07-25 23:33:03.336814] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:05.630 [2024-07-25 23:33:03.336830] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.630 [2024-07-25 23:33:03.336839] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.630 [2024-07-25 23:33:03.336846] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139c630) 00:27:05.630 [2024-07-25 23:33:03.336860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.630 [2024-07-25 23:33:03.336882] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eaf80, cid 0, qid 0 00:27:05.630 [2024-07-25 23:33:03.336978] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.630 [2024-07-25 23:33:03.336993] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.630 [2024-07-25 23:33:03.337000] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.630 [2024-07-25 23:33:03.337007] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eaf80) on tqpair=0x139c630 00:27:05.630 [2024-07-25 23:33:03.337015] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:05.630 [2024-07-25 23:33:03.337023] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:05.630 [2024-07-25 23:33:03.337036] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:05.630 [2024-07-25 23:33:03.337146] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:05.630 [2024-07-25 23:33:03.337156] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:05.630 [2024-07-25 23:33:03.337184] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.630 [2024-07-25 23:33:03.337191] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.630 [2024-07-25 23:33:03.337198] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139c630) 00:27:05.630 [2024-07-25 23:33:03.337208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.630 [2024-07-25 23:33:03.337230] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eaf80, cid 0, qid 0 00:27:05.630 [2024-07-25 23:33:03.337372] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.630 [2024-07-25 23:33:03.337385] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.630 [2024-07-25 23:33:03.337392] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.630 [2024-07-25 23:33:03.337399] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eaf80) on tqpair=0x139c630 00:27:05.630 [2024-07-25 23:33:03.337408] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:05.630 [2024-07-25 23:33:03.337424] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.630 [2024-07-25 23:33:03.337432] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.630 [2024-07-25 23:33:03.337439] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139c630) 00:27:05.630 [2024-07-25 23:33:03.337450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.630 [2024-07-25 23:33:03.337471] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eaf80, cid 0, qid 0 00:27:05.630 [2024-07-25 23:33:03.337566] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.630 [2024-07-25 23:33:03.337580] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.630 [2024-07-25 23:33:03.337587] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.630 [2024-07-25 23:33:03.337594] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eaf80) on tqpair=0x139c630 00:27:05.630 [2024-07-25 23:33:03.337602] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:05.630 [2024-07-25 23:33:03.337611] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:05.630 [2024-07-25 23:33:03.337624] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:05.630 [2024-07-25 23:33:03.337645] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:05.630 [2024-07-25 23:33:03.337659] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.630 [2024-07-25 23:33:03.337667] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139c630) 00:27:05.630 [2024-07-25 23:33:03.337678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.630 [2024-07-25 23:33:03.337699] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eaf80, cid 0, qid 0 00:27:05.630 [2024-07-25 23:33:03.337831] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:05.630 [2024-07-25 23:33:03.337846] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:05.630 [2024-07-25 23:33:03.337853] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:05.630 [2024-07-25 23:33:03.337860] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x139c630): datao=0, datal=4096, cccid=0 00:27:05.630 [2024-07-25 23:33:03.337868] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13eaf80) on tqpair(0x139c630): expected_datao=0, payload_size=4096 00:27:05.630 [2024-07-25 23:33:03.337876] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.630 [2024-07-25 23:33:03.337894] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:05.630 [2024-07-25 23:33:03.337904] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:05.889 [2024-07-25 23:33:03.379225] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.889 [2024-07-25 23:33:03.379244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.889 [2024-07-25 23:33:03.379252] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
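[editor's note] The entries above walk the controller-enable handshake: FABRIC CONNECT on the admin queue, then read VS, read CAP, check CC.EN, wait for CSTS.RDY = 0, set CC.EN = 1, wait for CSTS.RDY = 1, and finally IDENTIFY (cdw10:00000001). On TCP every register access is a Fabrics PROPERTY GET/SET capsule, which is why each state transition is bracketed by capsule_cmd sends. Once spdk_nvme_connect() returns, the same properties can be read back through the public register accessors; a hedged sketch (the annotated values match this trace only when read against the same target):

/* Hypothetical helper: dump the fabrics-mapped registers that the init
 * state machine above polls (each read is a FABRIC PROPERTY GET on TCP). */
#include <stdio.h>
#include "spdk/nvme.h"

static void
dump_regs(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	/* "NVMe Specification Version (VS): 1.3" in the dump below */
	printf("VS  : %u.%u\n", vs.bits.mjr, vs.bits.mnr);
	/* MQES is zero-based; TO is in 500 ms units (TO=30 -> 15000 ms) */
	printf("CAP : MQES=%u TO=%u ms\n", cap.bits.mqes + 1, cap.bits.to * 500);
	printf("CSTS: RDY=%u SHST=%u\n", csts.bits.rdy, csts.bits.shst);
}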
00:27:05.889 [2024-07-25 23:33:03.379259] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eaf80) on tqpair=0x139c630 00:27:05.889 [2024-07-25 23:33:03.379270] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:05.889 [2024-07-25 23:33:03.379279] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:05.889 [2024-07-25 23:33:03.379286] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:05.889 [2024-07-25 23:33:03.379293] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:05.889 [2024-07-25 23:33:03.379301] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:05.889 [2024-07-25 23:33:03.379309] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:05.889 [2024-07-25 23:33:03.379324] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:05.889 [2024-07-25 23:33:03.379341] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.889 [2024-07-25 23:33:03.379350] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.889 [2024-07-25 23:33:03.379357] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139c630) 00:27:05.889 [2024-07-25 23:33:03.379368] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:05.889 [2024-07-25 23:33:03.379391] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eaf80, cid 0, qid 0 00:27:05.889 [2024-07-25 23:33:03.379491] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.889 [2024-07-25 23:33:03.379506] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.889 [2024-07-25 23:33:03.379513] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.889 [2024-07-25 23:33:03.379520] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eaf80) on tqpair=0x139c630 00:27:05.889 [2024-07-25 23:33:03.379532] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.889 [2024-07-25 23:33:03.379543] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.889 [2024-07-25 23:33:03.379551] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139c630) 00:27:05.889 [2024-07-25 23:33:03.379561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.889 [2024-07-25 23:33:03.379572] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.889 [2024-07-25 23:33:03.379579] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.889 [2024-07-25 23:33:03.379586] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x139c630) 00:27:05.889 [2024-07-25 23:33:03.379595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.889 [2024-07-25 23:33:03.379604] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.889 [2024-07-25 
23:33:03.379612] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.889 [2024-07-25 23:33:03.379618] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x139c630) 00:27:05.889 [2024-07-25 23:33:03.379627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.889 [2024-07-25 23:33:03.379637] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.889 [2024-07-25 23:33:03.379644] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.889 [2024-07-25 23:33:03.379650] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139c630) 00:27:05.889 [2024-07-25 23:33:03.379659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.889 [2024-07-25 23:33:03.379682] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:05.889 [2024-07-25 23:33:03.379701] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:05.889 [2024-07-25 23:33:03.379713] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.889 [2024-07-25 23:33:03.379720] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x139c630) 00:27:05.889 [2024-07-25 23:33:03.379730] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.889 [2024-07-25 23:33:03.379751] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eaf80, cid 0, qid 0 00:27:05.889 [2024-07-25 23:33:03.379777] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eb100, cid 1, qid 0 00:27:05.889 [2024-07-25 23:33:03.379785] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eb280, cid 2, qid 0 00:27:05.889 [2024-07-25 23:33:03.379793] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eb400, cid 3, qid 0 00:27:05.889 [2024-07-25 23:33:03.379801] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eb580, cid 4, qid 0 00:27:05.889 [2024-07-25 23:33:03.380014] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.889 [2024-07-25 23:33:03.380029] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.889 [2024-07-25 23:33:03.380037] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.889 [2024-07-25 23:33:03.380044] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eb580) on tqpair=0x139c630 00:27:05.889 [2024-07-25 23:33:03.380052] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:05.889 [2024-07-25 23:33:03.384072] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:05.889 [2024-07-25 23:33:03.384096] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:05.889 [2024-07-25 23:33:03.384113] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number 
of queues (timeout 30000 ms) 00:27:05.889 [2024-07-25 23:33:03.384125] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.889 [2024-07-25 23:33:03.384132] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.889 [2024-07-25 23:33:03.384139] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x139c630) 00:27:05.889 [2024-07-25 23:33:03.384149] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:05.889 [2024-07-25 23:33:03.384171] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eb580, cid 4, qid 0 00:27:05.889 [2024-07-25 23:33:03.384317] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.889 [2024-07-25 23:33:03.384329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.889 [2024-07-25 23:33:03.384337] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.889 [2024-07-25 23:33:03.384344] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eb580) on tqpair=0x139c630 00:27:05.889 [2024-07-25 23:33:03.384413] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:05.889 [2024-07-25 23:33:03.384434] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:05.889 [2024-07-25 23:33:03.384448] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.889 [2024-07-25 23:33:03.384456] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x139c630) 00:27:05.889 [2024-07-25 23:33:03.384467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.889 [2024-07-25 23:33:03.384504] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eb580, cid 4, qid 0 00:27:05.889 [2024-07-25 23:33:03.384696] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:05.889 [2024-07-25 23:33:03.384709] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:05.889 [2024-07-25 23:33:03.384716] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:05.889 [2024-07-25 23:33:03.384723] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x139c630): datao=0, datal=4096, cccid=4 00:27:05.889 [2024-07-25 23:33:03.384731] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13eb580) on tqpair(0x139c630): expected_datao=0, payload_size=4096 00:27:05.889 [2024-07-25 23:33:03.384738] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.889 [2024-07-25 23:33:03.384755] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:05.889 [2024-07-25 23:33:03.384765] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:05.889 [2024-07-25 23:33:03.425181] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.889 [2024-07-25 23:33:03.425200] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.889 [2024-07-25 23:33:03.425208] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.889 [2024-07-25 23:33:03.425216] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eb580) on tqpair=0x139c630 00:27:05.889 [2024-07-25 
23:33:03.425232] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:05.889 [2024-07-25 23:33:03.425251] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:05.890 [2024-07-25 23:33:03.425270] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:05.890 [2024-07-25 23:33:03.425284] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.425292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x139c630) 00:27:05.890 [2024-07-25 23:33:03.425303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.890 [2024-07-25 23:33:03.425341] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eb580, cid 4, qid 0 00:27:05.890 [2024-07-25 23:33:03.425469] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:05.890 [2024-07-25 23:33:03.425482] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:05.890 [2024-07-25 23:33:03.425489] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.425496] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x139c630): datao=0, datal=4096, cccid=4 00:27:05.890 [2024-07-25 23:33:03.425504] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13eb580) on tqpair(0x139c630): expected_datao=0, payload_size=4096 00:27:05.890 [2024-07-25 23:33:03.425511] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.425528] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.425537] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.471090] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.890 [2024-07-25 23:33:03.471117] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.890 [2024-07-25 23:33:03.471124] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.471131] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eb580) on tqpair=0x139c630 00:27:05.890 [2024-07-25 23:33:03.471155] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:05.890 [2024-07-25 23:33:03.471175] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:05.890 [2024-07-25 23:33:03.471196] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.471203] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x139c630) 00:27:05.890 [2024-07-25 23:33:03.471215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.890 [2024-07-25 23:33:03.471237] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eb580, cid 4, qid 0 00:27:05.890 [2024-07-25 23:33:03.471395] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 
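[editor's note] After identify completes, the trace shows the remaining bring-up: four ASYNC EVENT REQUESTs queued (cids 0-3), keep alive armed at 5000000 us, SET FEATURES NUMBER OF QUEUES, then IDENTIFY with CNS 02 (active namespace list, which reports "Namespace 1 was added") followed by IDENTIFY NS and its ID descriptors. A hedged sketch of the equivalent host-side calls; the callback and helper names are hypothetical:

/* Hypothetical sketch: register an AER handler, then walk the active
 * namespace list that IDENTIFY (cdw10:00000002) returned above. */
#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"

static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	/* Fires when the target completes one of the queued AERs. */
	printf("AER: cdw0=0x%08x\n", cpl->cdw0);
}

static void
walk_active_ns(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
		const struct spdk_nvme_ns_data *nsdata = spdk_nvme_ns_get_data(ns);

		printf("nsid %" PRIu32 ": nsze=%" PRIu64 "\n",
		       nsid, nsdata->nsze);
	}
}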
00:27:05.890 [2024-07-25 23:33:03.471408] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:05.890 [2024-07-25 23:33:03.471415] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.471422] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x139c630): datao=0, datal=4096, cccid=4 00:27:05.890 [2024-07-25 23:33:03.471430] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13eb580) on tqpair(0x139c630): expected_datao=0, payload_size=4096 00:27:05.890 [2024-07-25 23:33:03.471437] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.471448] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.471456] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.471468] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.890 [2024-07-25 23:33:03.471478] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.890 [2024-07-25 23:33:03.471485] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.471492] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eb580) on tqpair=0x139c630 00:27:05.890 [2024-07-25 23:33:03.471506] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:05.890 [2024-07-25 23:33:03.471521] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:05.890 [2024-07-25 23:33:03.471537] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:05.890 [2024-07-25 23:33:03.471554] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:27:05.890 [2024-07-25 23:33:03.471564] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:05.890 [2024-07-25 23:33:03.471574] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:05.890 [2024-07-25 23:33:03.471583] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:05.890 [2024-07-25 23:33:03.471591] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:05.890 [2024-07-25 23:33:03.471600] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:05.890 [2024-07-25 23:33:03.471620] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.471629] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x139c630) 00:27:05.890 [2024-07-25 23:33:03.471640] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.890 [2024-07-25 23:33:03.471651] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.471658] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:27:05.890 [2024-07-25 23:33:03.471665] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x139c630) 00:27:05.890 [2024-07-25 23:33:03.471690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:05.890 [2024-07-25 23:33:03.471715] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eb580, cid 4, qid 0 00:27:05.890 [2024-07-25 23:33:03.471727] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eb700, cid 5, qid 0 00:27:05.890 [2024-07-25 23:33:03.471891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.890 [2024-07-25 23:33:03.471903] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.890 [2024-07-25 23:33:03.471911] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.471918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eb580) on tqpair=0x139c630 00:27:05.890 [2024-07-25 23:33:03.471928] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.890 [2024-07-25 23:33:03.471937] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.890 [2024-07-25 23:33:03.471944] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.471951] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eb700) on tqpair=0x139c630 00:27:05.890 [2024-07-25 23:33:03.471967] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.471976] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x139c630) 00:27:05.890 [2024-07-25 23:33:03.471987] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.890 [2024-07-25 23:33:03.472007] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eb700, cid 5, qid 0 00:27:05.890 [2024-07-25 23:33:03.472120] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.890 [2024-07-25 23:33:03.472135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.890 [2024-07-25 23:33:03.472143] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.472149] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eb700) on tqpair=0x139c630 00:27:05.890 [2024-07-25 23:33:03.472166] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.472175] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x139c630) 00:27:05.890 [2024-07-25 23:33:03.472199] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.890 [2024-07-25 23:33:03.472221] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eb700, cid 5, qid 0 00:27:05.890 [2024-07-25 23:33:03.472320] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.890 [2024-07-25 23:33:03.472334] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.890 [2024-07-25 23:33:03.472342] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.472349] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eb700) on 
tqpair=0x139c630 00:27:05.890 [2024-07-25 23:33:03.472365] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.472374] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x139c630) 00:27:05.890 [2024-07-25 23:33:03.472385] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.890 [2024-07-25 23:33:03.472405] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eb700, cid 5, qid 0 00:27:05.890 [2024-07-25 23:33:03.472507] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.890 [2024-07-25 23:33:03.472520] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.890 [2024-07-25 23:33:03.472527] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.472534] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eb700) on tqpair=0x139c630 00:27:05.890 [2024-07-25 23:33:03.472557] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.472568] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x139c630) 00:27:05.890 [2024-07-25 23:33:03.472579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.890 [2024-07-25 23:33:03.472591] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.472598] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x139c630) 00:27:05.890 [2024-07-25 23:33:03.472608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.890 [2024-07-25 23:33:03.472619] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.472626] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x139c630) 00:27:05.890 [2024-07-25 23:33:03.472636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.890 [2024-07-25 23:33:03.472647] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.472655] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x139c630) 00:27:05.890 [2024-07-25 23:33:03.472664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.890 [2024-07-25 23:33:03.472700] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eb700, cid 5, qid 0 00:27:05.890 [2024-07-25 23:33:03.472712] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eb580, cid 4, qid 0 00:27:05.890 [2024-07-25 23:33:03.472719] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eb880, cid 6, qid 0 00:27:05.890 [2024-07-25 23:33:03.472727] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eba00, cid 7, qid 0 00:27:05.890 [2024-07-25 23:33:03.472991] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:05.890 [2024-07-25 
23:33:03.473004] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:05.890 [2024-07-25 23:33:03.473011] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.473021] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x139c630): datao=0, datal=8192, cccid=5 00:27:05.890 [2024-07-25 23:33:03.473030] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13eb700) on tqpair(0x139c630): expected_datao=0, payload_size=8192 00:27:05.890 [2024-07-25 23:33:03.473037] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.473074] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.473086] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.473095] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:05.890 [2024-07-25 23:33:03.473105] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:05.890 [2024-07-25 23:33:03.473111] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.473118] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x139c630): datao=0, datal=512, cccid=4 00:27:05.890 [2024-07-25 23:33:03.473126] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13eb580) on tqpair(0x139c630): expected_datao=0, payload_size=512 00:27:05.890 [2024-07-25 23:33:03.473133] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.473143] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.473150] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.473159] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:05.890 [2024-07-25 23:33:03.473168] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:05.890 [2024-07-25 23:33:03.473175] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.473181] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x139c630): datao=0, datal=512, cccid=6 00:27:05.890 [2024-07-25 23:33:03.473189] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13eb880) on tqpair(0x139c630): expected_datao=0, payload_size=512 00:27:05.890 [2024-07-25 23:33:03.473197] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.473206] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.473213] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.473222] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:05.890 [2024-07-25 23:33:03.473231] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:05.890 [2024-07-25 23:33:03.473237] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.473244] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x139c630): datao=0, datal=4096, cccid=7 00:27:05.890 [2024-07-25 23:33:03.473252] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13eba00) on tqpair(0x139c630): expected_datao=0, payload_size=4096 00:27:05.890 [2024-07-25 23:33:03.473259] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:05.890 [2024-07-25 
23:33:03.473269] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.473276] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.473288] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.890 [2024-07-25 23:33:03.473298] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.890 [2024-07-25 23:33:03.473304] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.473311] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eb700) on tqpair=0x139c630 00:27:05.890 [2024-07-25 23:33:03.473330] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.890 [2024-07-25 23:33:03.473341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.890 [2024-07-25 23:33:03.473348] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.473370] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eb580) on tqpair=0x139c630 00:27:05.890 [2024-07-25 23:33:03.473385] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.890 [2024-07-25 23:33:03.473395] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.890 [2024-07-25 23:33:03.473404] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.473411] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eb880) on tqpair=0x139c630 00:27:05.890 [2024-07-25 23:33:03.473436] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:05.890 [2024-07-25 23:33:03.473446] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:05.890 [2024-07-25 23:33:03.473452] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:05.890 [2024-07-25 23:33:03.473459] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eba00) on tqpair=0x139c630 00:27:05.890 ===================================================== 00:27:05.890 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:05.890 ===================================================== 00:27:05.890 Controller Capabilities/Features 00:27:05.890 ================================ 00:27:05.890 Vendor ID: 8086 00:27:05.890 Subsystem Vendor ID: 8086 00:27:05.890 Serial Number: SPDK00000000000001 00:27:05.890 Model Number: SPDK bdev Controller 00:27:05.890 Firmware Version: 24.09 00:27:05.890 Recommended Arb Burst: 6 00:27:05.890 IEEE OUI Identifier: e4 d2 5c 00:27:05.890 Multi-path I/O 00:27:05.890 May have multiple subsystem ports: Yes 00:27:05.890 May have multiple controllers: Yes 00:27:05.890 Associated with SR-IOV VF: No 00:27:05.890 Max Data Transfer Size: 131072 00:27:05.890 Max Number of Namespaces: 32 00:27:05.890 Max Number of I/O Queues: 127 00:27:05.890 NVMe Specification Version (VS): 1.3 00:27:05.890 NVMe Specification Version (Identify): 1.3 00:27:05.890 Maximum Queue Entries: 128 00:27:05.890 Contiguous Queues Required: Yes 00:27:05.891 Arbitration Mechanisms Supported 00:27:05.891 Weighted Round Robin: Not Supported 00:27:05.891 Vendor Specific: Not Supported 00:27:05.891 Reset Timeout: 15000 ms 00:27:05.891 Doorbell Stride: 4 bytes 00:27:05.891 NVM Subsystem Reset: Not Supported 00:27:05.891 Command Sets Supported 00:27:05.891 NVM Command Set: Supported 00:27:05.891 Boot Partition: Not Supported 00:27:05.891 Memory Page Size Minimum: 4096 bytes 00:27:05.891 
Memory Page Size Maximum: 4096 bytes 00:27:05.891 Persistent Memory Region: Not Supported 00:27:05.891 Optional Asynchronous Events Supported 00:27:05.891 Namespace Attribute Notices: Supported 00:27:05.891 Firmware Activation Notices: Not Supported 00:27:05.891 ANA Change Notices: Not Supported 00:27:05.891 PLE Aggregate Log Change Notices: Not Supported 00:27:05.891 LBA Status Info Alert Notices: Not Supported 00:27:05.891 EGE Aggregate Log Change Notices: Not Supported 00:27:05.891 Normal NVM Subsystem Shutdown event: Not Supported 00:27:05.891 Zone Descriptor Change Notices: Not Supported 00:27:05.891 Discovery Log Change Notices: Not Supported 00:27:05.891 Controller Attributes 00:27:05.891 128-bit Host Identifier: Supported 00:27:05.891 Non-Operational Permissive Mode: Not Supported 00:27:05.891 NVM Sets: Not Supported 00:27:05.891 Read Recovery Levels: Not Supported 00:27:05.891 Endurance Groups: Not Supported 00:27:05.891 Predictable Latency Mode: Not Supported 00:27:05.891 Traffic Based Keep ALive: Not Supported 00:27:05.891 Namespace Granularity: Not Supported 00:27:05.891 SQ Associations: Not Supported 00:27:05.891 UUID List: Not Supported 00:27:05.891 Multi-Domain Subsystem: Not Supported 00:27:05.891 Fixed Capacity Management: Not Supported 00:27:05.891 Variable Capacity Management: Not Supported 00:27:05.891 Delete Endurance Group: Not Supported 00:27:05.891 Delete NVM Set: Not Supported 00:27:05.891 Extended LBA Formats Supported: Not Supported 00:27:05.891 Flexible Data Placement Supported: Not Supported 00:27:05.891 00:27:05.891 Controller Memory Buffer Support 00:27:05.891 ================================ 00:27:05.891 Supported: No 00:27:05.891 00:27:05.891 Persistent Memory Region Support 00:27:05.891 ================================ 00:27:05.891 Supported: No 00:27:05.891 00:27:05.891 Admin Command Set Attributes 00:27:05.891 ============================ 00:27:05.891 Security Send/Receive: Not Supported 00:27:05.891 Format NVM: Not Supported 00:27:05.891 Firmware Activate/Download: Not Supported 00:27:05.891 Namespace Management: Not Supported 00:27:05.891 Device Self-Test: Not Supported 00:27:05.891 Directives: Not Supported 00:27:05.891 NVMe-MI: Not Supported 00:27:05.891 Virtualization Management: Not Supported 00:27:05.891 Doorbell Buffer Config: Not Supported 00:27:05.891 Get LBA Status Capability: Not Supported 00:27:05.891 Command & Feature Lockdown Capability: Not Supported 00:27:05.891 Abort Command Limit: 4 00:27:05.891 Async Event Request Limit: 4 00:27:05.891 Number of Firmware Slots: N/A 00:27:05.891 Firmware Slot 1 Read-Only: N/A 00:27:05.891 Firmware Activation Without Reset: N/A 00:27:05.891 Multiple Update Detection Support: N/A 00:27:05.891 Firmware Update Granularity: No Information Provided 00:27:05.891 Per-Namespace SMART Log: No 00:27:05.891 Asymmetric Namespace Access Log Page: Not Supported 00:27:05.891 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:05.891 Command Effects Log Page: Supported 00:27:05.891 Get Log Page Extended Data: Supported 00:27:05.891 Telemetry Log Pages: Not Supported 00:27:05.891 Persistent Event Log Pages: Not Supported 00:27:05.891 Supported Log Pages Log Page: May Support 00:27:05.891 Commands Supported & Effects Log Page: Not Supported 00:27:05.891 Feature Identifiers & Effects Log Page:May Support 00:27:05.891 NVMe-MI Commands & Effects Log Page: May Support 00:27:05.891 Data Area 4 for Telemetry Log: Not Supported 00:27:05.891 Error Log Page Entries Supported: 128 00:27:05.891 Keep Alive: Supported 00:27:05.891 Keep 
Alive Granularity: 10000 ms 00:27:05.891 00:27:05.891 NVM Command Set Attributes 00:27:05.891 ========================== 00:27:05.891 Submission Queue Entry Size 00:27:05.891 Max: 64 00:27:05.891 Min: 64 00:27:05.891 Completion Queue Entry Size 00:27:05.891 Max: 16 00:27:05.891 Min: 16 00:27:05.891 Number of Namespaces: 32 00:27:05.891 Compare Command: Supported 00:27:05.891 Write Uncorrectable Command: Not Supported 00:27:05.891 Dataset Management Command: Supported 00:27:05.891 Write Zeroes Command: Supported 00:27:05.891 Set Features Save Field: Not Supported 00:27:05.891 Reservations: Supported 00:27:05.891 Timestamp: Not Supported 00:27:05.891 Copy: Supported 00:27:05.891 Volatile Write Cache: Present 00:27:05.891 Atomic Write Unit (Normal): 1 00:27:05.891 Atomic Write Unit (PFail): 1 00:27:05.891 Atomic Compare & Write Unit: 1 00:27:05.891 Fused Compare & Write: Supported 00:27:05.891 Scatter-Gather List 00:27:05.891 SGL Command Set: Supported 00:27:05.891 SGL Keyed: Supported 00:27:05.891 SGL Bit Bucket Descriptor: Not Supported 00:27:05.891 SGL Metadata Pointer: Not Supported 00:27:05.891 Oversized SGL: Not Supported 00:27:05.891 SGL Metadata Address: Not Supported 00:27:05.891 SGL Offset: Supported 00:27:05.891 Transport SGL Data Block: Not Supported 00:27:05.891 Replay Protected Memory Block: Not Supported 00:27:05.891 00:27:05.891 Firmware Slot Information 00:27:05.891 ========================= 00:27:05.891 Active slot: 1 00:27:05.891 Slot 1 Firmware Revision: 24.09 00:27:05.891 00:27:05.891 00:27:05.891 Commands Supported and Effects 00:27:05.891 ============================== 00:27:05.891 Admin Commands 00:27:05.891 -------------- 00:27:05.891 Get Log Page (02h): Supported 00:27:05.891 Identify (06h): Supported 00:27:05.891 Abort (08h): Supported 00:27:05.891 Set Features (09h): Supported 00:27:05.891 Get Features (0Ah): Supported 00:27:05.891 Asynchronous Event Request (0Ch): Supported 00:27:05.891 Keep Alive (18h): Supported 00:27:05.891 I/O Commands 00:27:05.891 ------------ 00:27:05.891 Flush (00h): Supported LBA-Change 00:27:05.891 Write (01h): Supported LBA-Change 00:27:05.891 Read (02h): Supported 00:27:05.891 Compare (05h): Supported 00:27:05.891 Write Zeroes (08h): Supported LBA-Change 00:27:05.891 Dataset Management (09h): Supported LBA-Change 00:27:05.891 Copy (19h): Supported LBA-Change 00:27:05.891 00:27:05.891 Error Log 00:27:05.891 ========= 00:27:05.891 00:27:05.891 Arbitration 00:27:05.891 =========== 00:27:05.891 Arbitration Burst: 1 00:27:05.891 00:27:05.891 Power Management 00:27:05.891 ================ 00:27:05.891 Number of Power States: 1 00:27:05.891 Current Power State: Power State #0 00:27:05.891 Power State #0: 00:27:05.891 Max Power: 0.00 W 00:27:05.891 Non-Operational State: Operational 00:27:05.891 Entry Latency: Not Reported 00:27:05.891 Exit Latency: Not Reported 00:27:05.891 Relative Read Throughput: 0 00:27:05.891 Relative Read Latency: 0 00:27:05.891 Relative Write Throughput: 0 00:27:05.891 Relative Write Latency: 0 00:27:05.891 Idle Power: Not Reported 00:27:05.891 Active Power: Not Reported 00:27:05.891 Non-Operational Permissive Mode: Not Supported 00:27:05.891 00:27:05.891 Health Information 00:27:05.891 ================== 00:27:05.891 Critical Warnings: 00:27:05.891 Available Spare Space: OK 00:27:05.891 Temperature: OK 00:27:05.891 Device Reliability: OK 00:27:05.891 Read Only: No 00:27:05.891 Volatile Memory Backup: OK 00:27:05.891 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:05.891 Temperature Threshold: 0 Kelvin (-273 Celsius) 
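[editor's note] The 131072-byte "Max Data Transfer Size" in the dump above follows from Identify Controller's MDTS field, which is a power of two in units of the minimum memory page size (CAP.MPSMIN, 4096 bytes here). The MDTS value itself is not printed by the tool, so it is inferred in this worked check:

/* Worked check: 4096 B minimum page << MDTS(=5, inferred) = 131072 B,
 * matching "Max Data Transfer Size: 131072" in the identify dump above. */
#include <stdio.h>

int main(void)
{
	unsigned mps_min = 4096; /* "Memory Page Size Minimum: 4096 bytes" */
	unsigned mdts = 5;       /* inferred: log2(131072 / 4096) */

	printf("max xfer = %u bytes\n", mps_min << mdts);
	return 0;
}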
00:27:05.891 Available Spare: 0%
00:27:05.891 Available Spare Threshold: 0%
00:27:05.891 Life Percentage Used: 0%
00:27:05.891 Data Units Read: 0
00:27:05.892 Data Units Written: 0
00:27:05.892 Host Read Commands: 0
00:27:05.892 Host Write Commands: 0
00:27:05.892 Controller Busy Time: 0 minutes
00:27:05.892 Power Cycles: 0
00:27:05.892 Power On Hours: 0 hours
00:27:05.892 Unsafe Shutdowns: 0
00:27:05.892 Unrecoverable Media Errors: 0
00:27:05.892 Lifetime Error Log Entries: 0
00:27:05.892 Warning Temperature Time: 0 minutes
00:27:05.892 Critical Temperature Time: 0 minutes
00:27:05.892 
00:27:05.892 Number of Queues
00:27:05.892 ================
00:27:05.892 Number of I/O Submission Queues: 127
00:27:05.892 Number of I/O Completion Queues: 127
00:27:05.892 
00:27:05.892 Active Namespaces
00:27:05.892 =================
00:27:05.892 Namespace ID:1
00:27:05.892 Error Recovery Timeout: Unlimited
00:27:05.892 Command Set Identifier: NVM (00h)
00:27:05.892 Deallocate: Supported
00:27:05.892 Deallocated/Unwritten Error: Not Supported
00:27:05.892 Deallocated Read Value: Unknown
00:27:05.892 Deallocate in Write Zeroes: Not Supported
00:27:05.892 Deallocated Guard Field: 0xFFFF
00:27:05.892 Flush: Supported
00:27:05.892 Reservation: Supported
00:27:05.892 Namespace Sharing Capabilities: Multiple Controllers
00:27:05.892 Size (in LBAs): 131072 (0GiB)
00:27:05.892 Capacity (in LBAs): 131072 (0GiB)
00:27:05.892 Utilization (in LBAs): 131072 (0GiB)
00:27:05.892 NGUID: ABCDEF0123456789ABCDEF0123456789
00:27:05.892 EUI64: ABCDEF0123456789
00:27:05.892 UUID: 6b46fc77-5aa5-4a82-b6aa-5e2550dd8119
00:27:05.892 Thin Provisioning: Not Supported
00:27:05.892 Per-NS Atomic Units: Yes
00:27:05.892 Atomic Boundary Size (Normal): 0
00:27:05.892 Atomic Boundary Size (PFail): 0
00:27:05.892 Atomic Boundary Offset: 0
00:27:05.892 Maximum Single Source Range Length: 65535
00:27:05.892 Maximum Copy Length: 65535
00:27:05.892 Maximum Source Range Count: 1
00:27:05.892 NGUID/EUI64 Never Reused: No
00:27:05.892 Namespace Write Protected: No
00:27:05.892 Number of LBA Formats: 1
00:27:05.892 Current LBA Format: LBA Format #00
00:27:05.892 LBA Format #00: Data Size: 512 Metadata Size: 0
00:27:05.892 
00:27:05.891 [2024-07-25 23:33:03.473565] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:05.891 [2024-07-25 23:33:03.473577] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x139c630)
00:27:05.891 [2024-07-25 23:33:03.473587] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.891 [2024-07-25 23:33:03.473608] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eba00, cid 7, qid 0
00:27:05.891 [2024-07-25 23:33:03.473826] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:05.891 [2024-07-25 23:33:03.473841] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:05.891 [2024-07-25 23:33:03.473849] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:05.891 [2024-07-25 23:33:03.473856] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eba00) on tqpair=0x139c630
00:27:05.891 [2024-07-25 23:33:03.473899] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:27:05.891 [2024-07-25 23:33:03.473918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eaf80) on tqpair=0x139c630
00:27:05.891 [2024-07-25 23:33:03.473929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:05.891 [2024-07-25 23:33:03.473938] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eb100) on tqpair=0x139c630
00:27:05.891 [2024-07-25 23:33:03.473946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:05.891 [2024-07-25 23:33:03.473955] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eb280) on tqpair=0x139c630
00:27:05.891 [2024-07-25 23:33:03.473963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:05.891 [2024-07-25 23:33:03.473971] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eb400) on tqpair=0x139c630
00:27:05.891 [2024-07-25 23:33:03.473994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:05.891 [2024-07-25 23:33:03.474007] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:05.891 [2024-07-25 23:33:03.474015] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:05.891 [2024-07-25 23:33:03.474021] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139c630)
00:27:05.891 [2024-07-25 23:33:03.474032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.891 [2024-07-25 23:33:03.474078] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eb400, cid 3, qid 0
00:27:05.891 [2024-07-25 23:33:03.474218] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:05.891 [2024-07-25 23:33:03.474231] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:05.891 [2024-07-25 23:33:03.474238] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:05.891 [2024-07-25 23:33:03.474245] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eb400) on tqpair=0x139c630
00:27:05.891 [2024-07-25 23:33:03.474256] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:05.891 [2024-07-25 23:33:03.474264] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:05.891 [2024-07-25 23:33:03.474274] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139c630)
00:27:05.891 [2024-07-25 23:33:03.474286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.891 [2024-07-25 23:33:03.474312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eb400, cid 3, qid 0
00:27:05.891 [2024-07-25 23:33:03.474427] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:05.891 [2024-07-25 23:33:03.474442] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:05.891 [2024-07-25 23:33:03.474449] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:05.891 [2024-07-25 23:33:03.474456] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eb400) on tqpair=0x139c630
00:27:05.891 [2024-07-25 23:33:03.474464] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us
00:27:05.891 [2024-07-25 23:33:03.474472] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms
00:27:05.891 [2024-07-25 23:33:03.474489] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:05.891 [2024-07-25 23:33:03.474498] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:05.891 [2024-07-25 23:33:03.474505] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139c630)
00:27:05.891 [2024-07-25 23:33:03.474515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.891 [2024-07-25 23:33:03.474536] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eb400, cid 3, qid 0
00:27:05.891 [2024-07-25 23:33:03.474630] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:05.891 [2024-07-25 23:33:03.474645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:05.891 [2024-07-25 23:33:03.474652] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:05.891 [2024-07-25 23:33:03.474659] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eb400) on tqpair=0x139c630
00:27:05.891 [2024-07-25 23:33:03.474676] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:05.891 [2024-07-25 23:33:03.474686] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:05.891 [2024-07-25 23:33:03.474692] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139c630)
00:27:05.891 [2024-07-25 23:33:03.474703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.891 [2024-07-25 23:33:03.474723] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eb400, cid 3, qid 0
00:27:05.891 [2024-07-25 23:33:03.474825] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:05.891 [2024-07-25 23:33:03.474837] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:05.891 [2024-07-25 23:33:03.474844] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:05.891 [2024-07-25 23:33:03.474851] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eb400) on tqpair=0x139c630
00:27:05.891 [2024-07-25 23:33:03.474867] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:05.891 [2024-07-25 23:33:03.474876] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:05.891 [2024-07-25 23:33:03.474883] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139c630)
00:27:05.891 [2024-07-25 23:33:03.474894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.891 [2024-07-25 23:33:03.474914] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eb400, cid 3, qid 0
00:27:05.891 [2024-07-25 23:33:03.475008] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:05.891 [2024-07-25 23:33:03.475023] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:05.891 [2024-07-25 23:33:03.475030] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:05.891 [2024-07-25 23:33:03.475037] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eb400) on tqpair=0x139c630
00:27:05.891 [2024-07-25 23:33:03.475053] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:05.891 [2024-07-25 23:33:03.479093] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:05.891 [2024-07-25 23:33:03.479104] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139c630)
00:27:05.891 [2024-07-25 23:33:03.479115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.891 [2024-07-25 23:33:03.479138] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13eb400, cid 3, qid 0
00:27:05.891 [2024-07-25 23:33:03.479280] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:05.891 [2024-07-25 23:33:03.479295] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:05.891 [2024-07-25 23:33:03.479303] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:05.891 [2024-07-25 23:33:03.479310] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13eb400) on tqpair=0x139c630
00:27:05.891 [2024-07-25 23:33:03.479323] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds
00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync
00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e
00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:05.892 rmmod nvme_tcp
00:27:05.892 rmmod nvme_fabrics
00:27:05.892 rmmod nvme_keyring
00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e
00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0
00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1474026 ']'
00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1474026
00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1474026 ']'
00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1474026
00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname
00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:05.892 23:33:03
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1474026 00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1474026' 00:27:05.892 killing process with pid 1474026 00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1474026 00:27:05.892 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1474026 00:27:06.149 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:06.150 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:06.150 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:06.150 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:06.150 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:06.150 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.150 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:06.150 23:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:08.683 00:27:08.683 real 0m5.469s 00:27:08.683 user 0m4.689s 00:27:08.683 sys 0m1.859s 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:08.683 ************************************ 00:27:08.683 END TEST nvmf_identify 00:27:08.683 ************************************ 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.683 ************************************ 00:27:08.683 START TEST nvmf_perf 00:27:08.683 ************************************ 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:08.683 * Looking for test storage... 
00:27:08.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:08.683 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:08.684 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:08.684 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:08.684 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:08.684 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:08.684 23:33:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:08.684 23:33:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:08.684 23:33:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:08.684 23:33:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:08.684 23:33:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:08.684 23:33:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:08.684 23:33:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
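Note: the nvmftestinit -> prepare_net_devs sequence that follows boils down to a short piece of Linux network plumbing before the target is started. A minimal shell sketch, assuming the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing that this particular run detects (both are machine-specific):

  ip netns add cvl_0_0_ns_spdk                                  # give the target side its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move one port of the NIC into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator keeps the second port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP on the default port
  ping -c 1 10.0.0.2                                            # sanity-check initiator-to-target reachability

nvmf_tgt is later launched through ip netns exec cvl_0_0_ns_spdk, so target and initiator traffic traverses a real NIC-to-NIC path even though both ends run on one host.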
00:27:08.684 23:33:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:08.684 23:33:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:08.684 23:33:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:08.684 23:33:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.684 23:33:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.684 23:33:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.684 23:33:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:08.684 23:33:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:08.684 23:33:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:08.684 23:33:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:10.585 
23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:10.585 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:10.585 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:27:10.585 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:10.585 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:10.586 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:10.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:10.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:27:10.586 00:27:10.586 --- 10.0.0.2 ping statistics --- 00:27:10.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.586 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:10.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:10.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:27:10.586 00:27:10.586 --- 10.0.0.1 ping statistics --- 00:27:10.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.586 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1476386 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1476386 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1476386 ']' 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
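Note: once nvmf_tgt is up and listening on /var/tmp/spdk.sock, the perf.sh setup that follows is a handful of JSON-RPC calls. A condensed sketch (rpc.py is shorthand for the spdk/scripts/rpc.py path used throughout this log; the Nvme0n1 bdev and its traddr 0000:88:00.0 come from this machine's gen_nvme.sh output):

  rpc.py bdev_malloc_create 64 512                                                   # 64 MiB malloc bdev, 512 B blocks -> Malloc0
  rpc.py nvmf_create_transport -t tcp -o                                             # transport options taken from NVMF_TRANSPORT_OPTS
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

spdk_nvme_perf is then pointed either at the local controller (-r 'trtype:PCIe traddr:0000:88:00.0') or at the exported subsystem (-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'), which is the sweep of -q/-o combinations in the runs below.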
00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:10.586 23:33:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:10.586 [2024-07-25 23:33:08.022122] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:27:10.586 [2024-07-25 23:33:08.022204] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:10.586 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.586 [2024-07-25 23:33:08.060876] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:10.586 [2024-07-25 23:33:08.089171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:10.586 [2024-07-25 23:33:08.182988] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:10.586 [2024-07-25 23:33:08.183042] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:10.586 [2024-07-25 23:33:08.183056] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:10.586 [2024-07-25 23:33:08.183090] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:10.586 [2024-07-25 23:33:08.183102] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:10.586 [2024-07-25 23:33:08.183154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.586 [2024-07-25 23:33:08.183213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:10.586 [2024-07-25 23:33:08.183280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:10.586 [2024-07-25 23:33:08.183282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.844 23:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:10.844 23:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:27:10.844 23:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:10.844 23:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:10.844 23:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:10.844 23:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:10.844 23:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:10.844 23:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:14.125 23:33:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:14.125 23:33:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:14.125 23:33:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:27:14.125 23:33:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:14.385 23:33:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:14.385 23:33:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:27:14.385 23:33:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:14.385 23:33:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:14.385 23:33:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:14.642 [2024-07-25 23:33:12.211965] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:14.642 23:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:14.900 23:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:14.900 23:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:15.158 23:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:15.158 23:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:15.416 23:33:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:15.674 [2024-07-25 23:33:13.191526] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:15.674 23:33:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:15.932 23:33:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:27:15.932 23:33:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:27:15.932 23:33:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:15.932 23:33:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:27:17.304 Initializing NVMe Controllers 00:27:17.304 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:27:17.304 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:27:17.305 Initialization complete. Launching workers. 
00:27:17.305 ======================================================== 00:27:17.305 Latency(us) 00:27:17.305 Device Information : IOPS MiB/s Average min max 00:27:17.305 PCIE (0000:88:00.0) NSID 1 from core 0: 85827.74 335.26 372.29 10.77 4342.74 00:27:17.305 ======================================================== 00:27:17.305 Total : 85827.74 335.26 372.29 10.77 4342.74 00:27:17.305 00:27:17.305 23:33:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:17.305 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.240 Initializing NVMe Controllers 00:27:18.240 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:18.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:18.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:18.240 Initialization complete. Launching workers. 00:27:18.240 ======================================================== 00:27:18.240 Latency(us) 00:27:18.240 Device Information : IOPS MiB/s Average min max 00:27:18.240 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 131.00 0.51 7751.79 159.33 45843.08 00:27:18.240 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 52.00 0.20 19528.41 7955.53 50859.65 00:27:18.240 ======================================================== 00:27:18.240 Total : 183.00 0.71 11098.16 159.33 50859.65 00:27:18.240 00:27:18.241 23:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:18.241 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.648 Initializing NVMe Controllers 00:27:19.648 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:19.648 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:19.648 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:19.648 Initialization complete. Launching workers. 
00:27:19.648 ======================================================== 00:27:19.648 Latency(us) 00:27:19.648 Device Information : IOPS MiB/s Average min max 00:27:19.648 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8589.99 33.55 3726.15 412.97 7514.85 00:27:19.648 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3875.00 15.14 8302.39 6809.15 16053.93 00:27:19.648 ======================================================== 00:27:19.648 Total : 12464.99 48.69 5148.77 412.97 16053.93 00:27:19.648 00:27:19.648 23:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:27:19.648 23:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:27:19.648 23:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:19.648 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.183 Initializing NVMe Controllers 00:27:22.183 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:22.183 Controller IO queue size 128, less than required. 00:27:22.183 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:22.183 Controller IO queue size 128, less than required. 00:27:22.183 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:22.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:22.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:22.183 Initialization complete. Launching workers. 00:27:22.183 ======================================================== 00:27:22.183 Latency(us) 00:27:22.183 Device Information : IOPS MiB/s Average min max 00:27:22.183 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1501.51 375.38 86599.28 53410.99 127266.36 00:27:22.183 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 593.41 148.35 228703.83 111950.35 327469.29 00:27:22.183 ======================================================== 00:27:22.183 Total : 2094.92 523.73 126852.07 53410.99 327469.29 00:27:22.183 00:27:22.183 23:33:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:27:22.183 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.440 No valid NVMe controllers or AIO or URING devices found 00:27:22.440 Initializing NVMe Controllers 00:27:22.440 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:22.440 Controller IO queue size 128, less than required. 00:27:22.440 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:22.440 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:22.440 Controller IO queue size 128, less than required. 00:27:22.440 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:22.440 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:27:22.440 WARNING: Some requested NVMe devices were skipped 00:27:22.440 23:33:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:22.440 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.972 Initializing NVMe Controllers 00:27:24.972 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:24.972 Controller IO queue size 128, less than required. 00:27:24.972 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:24.972 Controller IO queue size 128, less than required. 00:27:24.972 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:24.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:24.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:24.972 Initialization complete. Launching workers. 00:27:24.972 00:27:24.972 ==================== 00:27:24.972 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:24.972 TCP transport: 00:27:24.972 polls: 9574 00:27:24.972 idle_polls: 5546 00:27:24.972 sock_completions: 4028 00:27:24.972 nvme_completions: 5923 00:27:24.972 submitted_requests: 8854 00:27:24.972 queued_requests: 1 00:27:24.972 00:27:24.972 ==================== 00:27:24.972 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:24.972 TCP transport: 00:27:24.972 polls: 7428 00:27:24.972 idle_polls: 2487 00:27:24.972 sock_completions: 4941 00:27:24.972 nvme_completions: 6197 00:27:24.972 submitted_requests: 9226 00:27:24.972 queued_requests: 1 00:27:24.972 ======================================================== 00:27:24.972 Latency(us) 00:27:24.972 Device Information : IOPS MiB/s Average min max 00:27:24.972 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1477.32 369.33 88191.58 59253.85 149635.69 00:27:24.972 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1545.67 386.42 84038.09 40691.97 131656.07 00:27:24.972 ======================================================== 00:27:24.972 Total : 3023.00 755.75 86067.88 40691.97 149635.69 00:27:24.972 00:27:24.972 23:33:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:27:24.972 23:33:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:25.229 23:33:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:27:25.229 23:33:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:27:25.229 23:33:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:27:28.518 23:33:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=97676fe5-4342-4635-b6b9-30649adef7e6 00:27:28.518 23:33:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 97676fe5-4342-4635-b6b9-30649adef7e6 00:27:28.518 23:33:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=97676fe5-4342-4635-b6b9-30649adef7e6 00:27:28.518 23:33:26 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:27:28.518 23:33:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:27:28.518 23:33:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:27:28.518 23:33:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:28.774 23:33:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:27:28.774 { 00:27:28.774 "uuid": "97676fe5-4342-4635-b6b9-30649adef7e6", 00:27:28.774 "name": "lvs_0", 00:27:28.774 "base_bdev": "Nvme0n1", 00:27:28.774 "total_data_clusters": 238234, 00:27:28.774 "free_clusters": 238234, 00:27:28.774 "block_size": 512, 00:27:28.774 "cluster_size": 4194304 00:27:28.774 } 00:27:28.774 ]' 00:27:28.774 23:33:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="97676fe5-4342-4635-b6b9-30649adef7e6") .free_clusters' 00:27:28.774 23:33:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:27:28.774 23:33:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="97676fe5-4342-4635-b6b9-30649adef7e6") .cluster_size' 00:27:28.774 23:33:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:27:28.774 23:33:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:27:28.774 23:33:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:27:28.774 952936 00:27:28.774 23:33:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:27:28.774 23:33:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:27:28.775 23:33:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 97676fe5-4342-4635-b6b9-30649adef7e6 lbd_0 20480 00:27:29.340 23:33:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=3642a554-e755-4446-b2b6-41604f88dab7 00:27:29.340 23:33:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 3642a554-e755-4446-b2b6-41604f88dab7 lvs_n_0 00:27:30.271 23:33:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=94cb1064-bf65-46d0-9f1f-93a1e1f7f0fd 00:27:30.271 23:33:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 94cb1064-bf65-46d0-9f1f-93a1e1f7f0fd 00:27:30.271 23:33:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=94cb1064-bf65-46d0-9f1f-93a1e1f7f0fd 00:27:30.272 23:33:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:27:30.272 23:33:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:27:30.272 23:33:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:27:30.272 23:33:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:30.272 23:33:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:27:30.272 { 00:27:30.272 "uuid": "97676fe5-4342-4635-b6b9-30649adef7e6", 00:27:30.272 "name": "lvs_0", 00:27:30.272 "base_bdev": "Nvme0n1", 00:27:30.272 "total_data_clusters": 238234, 
00:27:30.272 "free_clusters": 233114, 00:27:30.272 "block_size": 512, 00:27:30.272 "cluster_size": 4194304 00:27:30.272 }, 00:27:30.272 { 00:27:30.272 "uuid": "94cb1064-bf65-46d0-9f1f-93a1e1f7f0fd", 00:27:30.272 "name": "lvs_n_0", 00:27:30.272 "base_bdev": "3642a554-e755-4446-b2b6-41604f88dab7", 00:27:30.272 "total_data_clusters": 5114, 00:27:30.272 "free_clusters": 5114, 00:27:30.272 "block_size": 512, 00:27:30.272 "cluster_size": 4194304 00:27:30.272 } 00:27:30.272 ]' 00:27:30.272 23:33:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="94cb1064-bf65-46d0-9f1f-93a1e1f7f0fd") .free_clusters' 00:27:30.272 23:33:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:27:30.272 23:33:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="94cb1064-bf65-46d0-9f1f-93a1e1f7f0fd") .cluster_size' 00:27:30.272 23:33:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:27:30.272 23:33:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:27:30.272 23:33:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:27:30.272 20456 00:27:30.272 23:33:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:27:30.272 23:33:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 94cb1064-bf65-46d0-9f1f-93a1e1f7f0fd lbd_nest_0 20456 00:27:30.530 23:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=5472b857-f5d2-485a-b74e-f87d0ef2d590 00:27:30.530 23:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:30.787 23:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:27:30.787 23:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 5472b857-f5d2-485a-b74e-f87d0ef2d590 00:27:31.045 23:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:31.305 23:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:27:31.305 23:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:27:31.305 23:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:31.305 23:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:31.305 23:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:31.305 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.513 Initializing NVMe Controllers 00:27:43.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:43.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:43.513 Initialization complete. Launching workers. 
00:27:43.513 ======================================================== 00:27:43.513 Latency(us) 00:27:43.513 Device Information : IOPS MiB/s Average min max 00:27:43.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 49.70 0.02 20189.40 189.31 44874.60 00:27:43.513 ======================================================== 00:27:43.513 Total : 49.70 0.02 20189.40 189.31 44874.60 00:27:43.513 00:27:43.513 23:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:43.513 23:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:43.513 EAL: No free 2048 kB hugepages reported on node 1 00:27:53.487 Initializing NVMe Controllers 00:27:53.487 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:53.487 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:53.487 Initialization complete. Launching workers. 00:27:53.487 ======================================================== 00:27:53.487 Latency(us) 00:27:53.487 Device Information : IOPS MiB/s Average min max 00:27:53.487 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 75.38 9.42 13275.91 4985.45 47899.56 00:27:53.487 ======================================================== 00:27:53.487 Total : 75.38 9.42 13275.91 4985.45 47899.56 00:27:53.487 00:27:53.487 23:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:53.487 23:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:53.487 23:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:53.487 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.500 Initializing NVMe Controllers 00:28:03.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:03.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:03.500 Initialization complete. Launching workers. 00:28:03.500 ======================================================== 00:28:03.500 Latency(us) 00:28:03.500 Device Information : IOPS MiB/s Average min max 00:28:03.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7472.13 3.65 4282.29 270.79 15089.49 00:28:03.500 ======================================================== 00:28:03.500 Total : 7472.13 3.65 4282.29 270.79 15089.49 00:28:03.500 00:28:03.500 23:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:03.500 23:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:03.500 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.474 Initializing NVMe Controllers 00:28:13.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:13.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:13.474 Initialization complete. Launching workers. 
00:28:13.474 ======================================================== 00:28:13.474 Latency(us) 00:28:13.474 Device Information : IOPS MiB/s Average min max 00:28:13.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2727.60 340.95 11739.33 535.16 26645.50 00:28:13.474 ======================================================== 00:28:13.474 Total : 2727.60 340.95 11739.33 535.16 26645.50 00:28:13.474 00:28:13.474 23:34:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:13.474 23:34:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:13.474 23:34:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:13.474 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.451 Initializing NVMe Controllers 00:28:23.451 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:23.451 Controller IO queue size 128, less than required. 00:28:23.451 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:23.451 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:23.451 Initialization complete. Launching workers. 00:28:23.451 ======================================================== 00:28:23.451 Latency(us) 00:28:23.451 Device Information : IOPS MiB/s Average min max 00:28:23.451 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11910.74 5.82 10751.01 1745.18 29897.27 00:28:23.451 ======================================================== 00:28:23.451 Total : 11910.74 5.82 10751.01 1745.18 29897.27 00:28:23.451 00:28:23.451 23:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:23.451 23:34:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:23.451 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.652 Initializing NVMe Controllers 00:28:35.652 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:35.652 Controller IO queue size 128, less than required. 00:28:35.652 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:35.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:35.652 Initialization complete. Launching workers. 
00:28:35.652 ======================================================== 00:28:35.652 Latency(us) 00:28:35.652 Device Information : IOPS MiB/s Average min max 00:28:35.652 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1205.90 150.74 106455.83 31304.52 211065.05 00:28:35.652 ======================================================== 00:28:35.652 Total : 1205.90 150.74 106455.83 31304.52 211065.05 00:28:35.652 00:28:35.652 23:34:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:35.652 23:34:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5472b857-f5d2-485a-b74e-f87d0ef2d590 00:28:35.652 23:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:35.652 23:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3642a554-e755-4446-b2b6-41604f88dab7 00:28:35.652 23:34:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:35.652 23:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:35.652 23:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:28:35.652 23:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:35.652 23:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:28:35.652 23:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:35.652 23:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:28:35.652 23:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:35.652 23:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:35.652 rmmod nvme_tcp 00:28:35.652 rmmod nvme_fabrics 00:28:35.652 rmmod nvme_keyring 00:28:35.652 23:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:35.652 23:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:28:35.652 23:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:28:35.652 23:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1476386 ']' 00:28:35.652 23:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1476386 00:28:35.652 23:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1476386 ']' 00:28:35.652 23:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1476386 00:28:35.652 23:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:28:35.652 23:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:35.652 23:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1476386 00:28:35.652 23:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:35.652 23:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:35.652 23:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process 
with pid 1476386' 00:28:35.652 killing process with pid 1476386 00:28:35.652 23:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1476386 00:28:35.652 23:34:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1476386 00:28:37.028 23:34:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:37.028 23:34:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:37.028 23:34:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:37.028 23:34:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:37.028 23:34:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:37.028 23:34:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.028 23:34:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:37.028 23:34:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:39.566 00:28:39.566 real 1m30.771s 00:28:39.566 user 5m35.202s 00:28:39.566 sys 0m16.212s 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:39.566 ************************************ 00:28:39.566 END TEST nvmf_perf 00:28:39.566 ************************************ 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.566 ************************************ 00:28:39.566 START TEST nvmf_fio_host 00:28:39.566 ************************************ 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:39.566 * Looking for test storage... 
00:28:39.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:28:39.566 23:34:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:41.504 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:41.504 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:41.504 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:41.505 
23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:41.505 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:41.505 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
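The nvmf_tcp_init sequence running here splits the two e810 ports into initiator and target endpoints: cvl_0_0 is moved into a fresh network namespace for the target, while cvl_0_1 stays in the root namespace as the initiator. In outline (a minimal sketch of the same iproute2/iptables steps the script executes around this point, in the order they appear below):

  ip netns add cvl_0_0_ns_spdk                                        # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # initiator -> target sanity check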
00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:41.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:41.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:28:41.505 00:28:41.505 --- 10.0.0.2 ping statistics --- 00:28:41.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.505 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:41.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:41.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:28:41.505 00:28:41.505 --- 10.0.0.1 ping statistics --- 00:28:41.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.505 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1488451 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # 
waitforlisten 1488451 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1488451 ']' 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:41.505 23:34:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.505 [2024-07-25 23:34:38.918869] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:28:41.505 [2024-07-25 23:34:38.918952] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:41.505 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.505 [2024-07-25 23:34:38.958868] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:41.505 [2024-07-25 23:34:38.985819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:41.505 [2024-07-25 23:34:39.071546] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:41.505 [2024-07-25 23:34:39.071600] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:41.505 [2024-07-25 23:34:39.071614] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:41.505 [2024-07-25 23:34:39.071626] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:41.505 [2024-07-25 23:34:39.071635] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
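The target for this fio host test was started inside that namespace as nvmf_tgt -i 0 -e 0xFFFF -m 0xF; the 0xF core mask is why four "Reactor started" notices appear next, and waitforlisten then blocks until the app answers on /var/tmp/spdk.sock. A minimal sketch of that launch-and-wait step (the polling loop is an assumption standing in for the autotest helper, not its exact code):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the RPC socket until the target responds (stand-in for waitforlisten).
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done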
00:28:41.505 [2024-07-25 23:34:39.071759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.505 [2024-07-25 23:34:39.071891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:41.505 [2024-07-25 23:34:39.071939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:41.505 [2024-07-25 23:34:39.071942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.505 23:34:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:41.506 23:34:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:28:41.506 23:34:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:41.764 [2024-07-25 23:34:39.410784] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:41.764 23:34:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:28:41.764 23:34:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:41.764 23:34:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.764 23:34:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:28:42.042 Malloc1 00:28:42.043 23:34:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:42.303 23:34:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:42.561 23:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:42.818 [2024-07-25 23:34:40.481988] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:42.818 23:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:43.076 23:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:43.076 23:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:43.076 23:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:43.076 23:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:43.076 23:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:43.076 23:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:43.076 23:34:40 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:43.076 23:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:43.076 23:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:43.076 23:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:43.076 23:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:43.076 23:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:43.076 23:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:43.076 23:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:43.076 23:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:43.076 23:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:43.076 23:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:43.076 23:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:43.076 23:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:43.076 23:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:43.076 23:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:43.076 23:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:43.076 23:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:43.334 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:43.334 fio-3.35 00:28:43.334 Starting 1 thread 00:28:43.334 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.871 [2024-07-25 23:34:43.301180] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e6c80 is same with the state(5) to be set 00:28:45.871 [2024-07-25 23:34:43.301273] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e6c80 is same with the state(5) to be set 00:28:45.871 [2024-07-25 23:34:43.301296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e6c80 is same with the state(5) to be set 00:28:45.871 [2024-07-25 23:34:43.301309] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e6c80 is same with the state(5) to be set 00:28:45.871 [2024-07-25 23:34:43.301322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e6c80 is same with the state(5) to be set 00:28:45.871 00:28:45.872 test: (groupid=0, jobs=1): err= 0: pid=1488876: Thu Jul 25 23:34:43 2024 00:28:45.872 read: IOPS=8304, BW=32.4MiB/s (34.0MB/s)(65.1MiB/2007msec) 00:28:45.872 slat (nsec): min=1902, max=251684, 
avg=2575.62, stdev=2471.49 00:28:45.872 clat (usec): min=2906, max=13371, avg=8445.44, stdev=664.85 00:28:45.872 lat (usec): min=2936, max=13374, avg=8448.01, stdev=664.72 00:28:45.872 clat percentiles (usec): 00:28:45.872 | 1.00th=[ 6915], 5.00th=[ 7373], 10.00th=[ 7635], 20.00th=[ 7898], 00:28:45.872 | 30.00th=[ 8094], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8586], 00:28:45.872 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9241], 95.00th=[ 9503], 00:28:45.872 | 99.00th=[ 9896], 99.50th=[10028], 99.90th=[12125], 99.95th=[12518], 00:28:45.872 | 99.99th=[13304] 00:28:45.872 bw ( KiB/s): min=31928, max=34008, per=99.95%, avg=33202.00, stdev=902.11, samples=4 00:28:45.872 iops : min= 7982, max= 8502, avg=8300.50, stdev=225.53, samples=4 00:28:45.872 write: IOPS=8306, BW=32.4MiB/s (34.0MB/s)(65.1MiB/2007msec); 0 zone resets 00:28:45.872 slat (nsec): min=1956, max=146901, avg=2664.19, stdev=1566.35 00:28:45.872 clat (usec): min=1589, max=12351, avg=6856.40, stdev=572.60 00:28:45.872 lat (usec): min=1598, max=12353, avg=6859.06, stdev=572.55 00:28:45.872 clat percentiles (usec): 00:28:45.872 | 1.00th=[ 5538], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6390], 00:28:45.872 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6980], 00:28:45.872 | 70.00th=[ 7111], 80.00th=[ 7308], 90.00th=[ 7504], 95.00th=[ 7701], 00:28:45.872 | 99.00th=[ 8094], 99.50th=[ 8291], 99.90th=[10683], 99.95th=[11207], 00:28:45.872 | 99.99th=[12256] 00:28:45.872 bw ( KiB/s): min=32776, max=33424, per=99.99%, avg=33222.00, stdev=300.00, samples=4 00:28:45.872 iops : min= 8194, max= 8356, avg=8305.50, stdev=75.00, samples=4 00:28:45.872 lat (msec) : 2=0.01%, 4=0.11%, 10=99.54%, 20=0.34% 00:28:45.872 cpu : usr=57.83%, sys=38.43%, ctx=90, majf=0, minf=40 00:28:45.872 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:45.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:45.872 issued rwts: total=16668,16671,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.872 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:45.872 00:28:45.872 Run status group 0 (all jobs): 00:28:45.872 READ: bw=32.4MiB/s (34.0MB/s), 32.4MiB/s-32.4MiB/s (34.0MB/s-34.0MB/s), io=65.1MiB (68.3MB), run=2007-2007msec 00:28:45.872 WRITE: bw=32.4MiB/s (34.0MB/s), 32.4MiB/s-32.4MiB/s (34.0MB/s-34.0MB/s), io=65.1MiB (68.3MB), run=2007-2007msec 00:28:45.872 23:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:45.872 23:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:45.872 23:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:45.872 23:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:45.872 23:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:45.872 23:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:45.872 23:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:45.872 23:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:45.872 23:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:45.872 23:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:45.872 23:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:45.872 23:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:45.872 23:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:45.872 23:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:45.872 23:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:45.872 23:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:45.872 23:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:45.872 23:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:45.872 23:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:45.872 23:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:45.872 23:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:45.872 23:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:45.872 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:28:45.872 fio-3.35 00:28:45.872 Starting 1 thread 00:28:45.872 EAL: No free 2048 kB hugepages reported on node 1 00:28:48.396 00:28:48.396 test: (groupid=0, jobs=1): err= 0: pid=1489255: Thu Jul 25 23:34:45 2024 00:28:48.396 read: IOPS=8257, BW=129MiB/s (135MB/s)(259MiB/2011msec) 00:28:48.396 slat (usec): min=2, max=111, avg= 3.70, stdev= 1.65 00:28:48.396 clat (usec): min=2759, max=17817, avg=8999.11, stdev=2185.64 00:28:48.396 lat (usec): min=2763, max=17820, avg=9002.82, stdev=2185.70 00:28:48.396 clat percentiles (usec): 00:28:48.396 | 1.00th=[ 4686], 5.00th=[ 5669], 10.00th=[ 6194], 20.00th=[ 7111], 00:28:48.396 | 30.00th=[ 7701], 40.00th=[ 8291], 50.00th=[ 8979], 60.00th=[ 9503], 00:28:48.396 | 70.00th=[10159], 80.00th=[10945], 90.00th=[11600], 95.00th=[12649], 00:28:48.396 | 99.00th=[15008], 99.50th=[15401], 99.90th=[16581], 99.95th=[17433], 00:28:48.396 | 99.99th=[17695] 00:28:48.396 bw ( KiB/s): min=60160, max=77413, per=51.45%, avg=67977.25, stdev=7317.70, samples=4 00:28:48.396 iops : min= 3760, max= 4838, avg=4248.50, stdev=457.22, samples=4 00:28:48.396 write: IOPS=4705, BW=73.5MiB/s (77.1MB/s)(139MiB/1884msec); 0 zone resets 00:28:48.396 slat (usec): min=30, max=146, avg=33.53, stdev= 5.12 
00:28:48.396 clat (usec): min=5156, max=19499, avg=11515.82, stdev=2058.85 00:28:48.396 lat (usec): min=5187, max=19536, avg=11549.35, stdev=2058.82 00:28:48.396 clat percentiles (usec): 00:28:48.396 | 1.00th=[ 7504], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[ 9765], 00:28:48.396 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11338], 60.00th=[11863], 00:28:48.396 | 70.00th=[12518], 80.00th=[13304], 90.00th=[14353], 95.00th=[15270], 00:28:48.396 | 99.00th=[16581], 99.50th=[16909], 99.90th=[18482], 99.95th=[18744], 00:28:48.396 | 99.99th=[19530] 00:28:48.396 bw ( KiB/s): min=63616, max=80510, per=93.69%, avg=70543.50, stdev=7265.48, samples=4 00:28:48.396 iops : min= 3976, max= 5031, avg=4408.75, stdev=453.69, samples=4 00:28:48.396 lat (msec) : 4=0.16%, 10=53.10%, 20=46.74% 00:28:48.396 cpu : usr=76.27%, sys=21.39%, ctx=52, majf=0, minf=56 00:28:48.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:28:48.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:48.396 issued rwts: total=16605,8866,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.396 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:48.396 00:28:48.396 Run status group 0 (all jobs): 00:28:48.396 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=259MiB (272MB), run=2011-2011msec 00:28:48.396 WRITE: bw=73.5MiB/s (77.1MB/s), 73.5MiB/s-73.5MiB/s (77.1MB/s-77.1MB/s), io=139MiB (145MB), run=1884-1884msec 00:28:48.396 23:34:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:48.652 23:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:28:48.652 23:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:28:48.652 23:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:28:48.652 23:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:28:48.652 23:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:28:48.652 23:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:48.652 23:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:48.652 23:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:28:48.652 23:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:28:48.652 23:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:28:48.652 23:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:28:51.931 Nvme0n1 00:28:51.931 23:34:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:28:54.461 23:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=cd74a6f6-3114-458d-a628-95bab0d741d3 00:28:54.461 23:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
host/fio.sh@54 -- # get_lvs_free_mb cd74a6f6-3114-458d-a628-95bab0d741d3 00:28:54.461 23:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=cd74a6f6-3114-458d-a628-95bab0d741d3 00:28:54.461 23:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:54.461 23:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:28:54.461 23:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:28:54.461 23:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:54.719 23:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:54.719 { 00:28:54.719 "uuid": "cd74a6f6-3114-458d-a628-95bab0d741d3", 00:28:54.719 "name": "lvs_0", 00:28:54.719 "base_bdev": "Nvme0n1", 00:28:54.719 "total_data_clusters": 930, 00:28:54.719 "free_clusters": 930, 00:28:54.719 "block_size": 512, 00:28:54.719 "cluster_size": 1073741824 00:28:54.719 } 00:28:54.719 ]' 00:28:54.719 23:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="cd74a6f6-3114-458d-a628-95bab0d741d3") .free_clusters' 00:28:54.719 23:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:28:54.719 23:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="cd74a6f6-3114-458d-a628-95bab0d741d3") .cluster_size' 00:28:54.977 23:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:28:54.977 23:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:28:54.977 23:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:28:54.977 952320 00:28:54.977 23:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:28:55.234 999327aa-394b-42b6-82ce-4689b2cd33b8 00:28:55.234 23:34:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:28:55.492 23:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:28:55.750 23:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:56.007 23:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:56.007 23:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:56.007 23:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:56.007 23:34:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:56.007 23:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:56.007 23:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:56.007 23:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:56.007 23:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:56.007 23:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:56.007 23:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:56.007 23:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:56.007 23:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:56.007 23:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:56.007 23:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:56.007 23:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:56.007 23:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:56.007 23:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:56.007 23:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:56.007 23:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:56.007 23:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:56.007 23:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:56.007 23:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:56.264 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:56.264 fio-3.35 00:28:56.264 Starting 1 thread 00:28:56.264 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.789 [2024-07-25 23:34:56.290346] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x895500 is same with the state(5) to be set 00:28:58.789 [2024-07-25 23:34:56.290412] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x895500 is same with the state(5) to be set 00:28:58.789 00:28:58.789 test: (groupid=0, jobs=1): err= 0: pid=1490540: Thu Jul 25 23:34:56 2024 00:28:58.789 read: IOPS=6164, BW=24.1MiB/s (25.2MB/s)(48.4MiB/2008msec) 00:28:58.789 slat (nsec): min=1889, max=136532, avg=2533.04, stdev=2046.17 00:28:58.789 clat (usec): min=796, max=171044, avg=11424.51, stdev=11496.09 00:28:58.789 lat (usec): min=799, max=171082, avg=11427.05, stdev=11496.32 00:28:58.789 clat percentiles (msec): 00:28:58.789 | 1.00th=[ 9], 
5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 10], 00:28:58.789 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:28:58.789 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 12], 00:28:58.789 | 99.00th=[ 13], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:28:58.789 | 99.99th=[ 171] 00:28:58.789 bw ( KiB/s): min=17224, max=27176, per=99.86%, avg=24624.00, stdev=4934.13, samples=4 00:28:58.789 iops : min= 4306, max= 6794, avg=6156.00, stdev=1233.53, samples=4 00:28:58.789 write: IOPS=6152, BW=24.0MiB/s (25.2MB/s)(48.3MiB/2008msec); 0 zone resets 00:28:58.789 slat (usec): min=2, max=110, avg= 2.65, stdev= 1.54 00:28:58.789 clat (usec): min=401, max=169336, avg=9207.20, stdev=10802.28 00:28:58.789 lat (usec): min=404, max=169342, avg=9209.86, stdev=10802.52 00:28:58.789 clat percentiles (msec): 00:28:58.789 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:28:58.789 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:28:58.789 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:28:58.789 | 99.00th=[ 11], 99.50th=[ 16], 99.90th=[ 169], 99.95th=[ 169], 00:28:58.789 | 99.99th=[ 169] 00:28:58.789 bw ( KiB/s): min=18280, max=26816, per=99.90%, avg=24586.00, stdev=4206.27, samples=4 00:28:58.789 iops : min= 4570, max= 6704, avg=6146.50, stdev=1051.57, samples=4 00:28:58.789 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:28:58.789 lat (msec) : 2=0.03%, 4=0.14%, 10=60.69%, 20=38.60%, 250=0.52% 00:28:58.789 cpu : usr=58.94%, sys=38.32%, ctx=118, majf=0, minf=40 00:28:58.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:28:58.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:58.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:58.789 issued rwts: total=12378,12354,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:58.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:58.789 00:28:58.789 Run status group 0 (all jobs): 00:28:58.789 READ: bw=24.1MiB/s (25.2MB/s), 24.1MiB/s-24.1MiB/s (25.2MB/s-25.2MB/s), io=48.4MiB (50.7MB), run=2008-2008msec 00:28:58.789 WRITE: bw=24.0MiB/s (25.2MB/s), 24.0MiB/s-24.0MiB/s (25.2MB/s-25.2MB/s), io=48.3MiB (50.6MB), run=2008-2008msec 00:28:58.789 23:34:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:59.047 23:34:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:00.425 23:34:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=1299f381-5b79-4a6e-98e9-6a805f49441d 00:29:00.425 23:34:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 1299f381-5b79-4a6e-98e9-6a805f49441d 00:29:00.425 23:34:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=1299f381-5b79-4a6e-98e9-6a805f49441d 00:29:00.425 23:34:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:00.425 23:34:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:00.425 23:34:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:00.425 23:34:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 
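For reference, the get_lvs_free_mb helper whose trace continues below reduces to free_mb = free_clusters * cluster_size / 1048576; a minimal standalone sketch of the same query (here rpc.py abbreviates the full scripts/rpc.py path seen in the trace, and a running target plus jq are assumed):
#!/usr/bin/env bash
# Compute an lvstore's free space in MiB from its UUID, mirroring the traced math:
# lvs_0: 930 * 1073741824 / 1048576 = 952320; lvs_n_0: 237847 * 4194304 / 1048576 = 951388.
uuid="$1"
lvs_info=$(rpc.py bdev_lvol_get_lvstores)
fc=$(jq ".[] | select(.uuid==\"$uuid\") .free_clusters" <<< "$lvs_info")
cs=$(jq ".[] | select(.uuid==\"$uuid\") .cluster_size" <<< "$lvs_info")
echo $(( fc * cs / 1048576 ))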
00:29:00.425 23:34:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:00.425 { 00:29:00.425 "uuid": "cd74a6f6-3114-458d-a628-95bab0d741d3", 00:29:00.425 "name": "lvs_0", 00:29:00.425 "base_bdev": "Nvme0n1", 00:29:00.425 "total_data_clusters": 930, 00:29:00.425 "free_clusters": 0, 00:29:00.425 "block_size": 512, 00:29:00.425 "cluster_size": 1073741824 00:29:00.425 }, 00:29:00.425 { 00:29:00.425 "uuid": "1299f381-5b79-4a6e-98e9-6a805f49441d", 00:29:00.425 "name": "lvs_n_0", 00:29:00.425 "base_bdev": "999327aa-394b-42b6-82ce-4689b2cd33b8", 00:29:00.425 "total_data_clusters": 237847, 00:29:00.425 "free_clusters": 237847, 00:29:00.425 "block_size": 512, 00:29:00.425 "cluster_size": 4194304 00:29:00.425 } 00:29:00.425 ]' 00:29:00.425 23:34:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="1299f381-5b79-4a6e-98e9-6a805f49441d") .free_clusters' 00:29:00.425 23:34:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:29:00.425 23:34:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="1299f381-5b79-4a6e-98e9-6a805f49441d") .cluster_size' 00:29:00.425 23:34:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:00.425 23:34:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:29:00.425 23:34:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:29:00.425 951388 00:29:00.425 23:34:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:29:00.991 f8d163a7-51c1-4de8-972c-61cda9bc7a60 00:29:01.250 23:34:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:01.250 23:34:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:01.540 23:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:29:01.798 23:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:01.798 23:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:01.798 23:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:01.798 23:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:01.798 23:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:01.798 23:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:01.798 23:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:01.798 23:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:01.798 23:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:01.798 23:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:01.798 23:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:01.798 23:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:01.798 23:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:01.798 23:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:01.798 23:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:01.798 23:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:01.798 23:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:01.798 23:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:01.798 23:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:01.798 23:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:01.798 23:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:01.798 23:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:02.056 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:02.056 fio-3.35 00:29:02.056 Starting 1 thread 00:29:02.056 EAL: No free 2048 kB hugepages reported on node 1 00:29:04.586 00:29:04.586 test: (groupid=0, jobs=1): err= 0: pid=1491278: Thu Jul 25 23:35:02 2024 00:29:04.586 read: IOPS=5840, BW=22.8MiB/s (23.9MB/s)(45.8MiB/2009msec) 00:29:04.586 slat (nsec): min=1937, max=162572, avg=2657.69, stdev=2292.55 00:29:04.586 clat (usec): min=4561, max=19889, avg=12089.32, stdev=1037.70 00:29:04.586 lat (usec): min=4566, max=19891, avg=12091.97, stdev=1037.56 00:29:04.586 clat percentiles (usec): 00:29:04.586 | 1.00th=[ 9634], 5.00th=[10421], 10.00th=[10814], 20.00th=[11338], 00:29:04.586 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:29:04.586 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13304], 95.00th=[13698], 00:29:04.586 | 99.00th=[14484], 99.50th=[14746], 99.90th=[17433], 99.95th=[18744], 00:29:04.586 | 99.99th=[19792] 00:29:04.586 bw ( KiB/s): min=21908, max=24048, per=99.82%, avg=23321.00, stdev=959.44, samples=4 00:29:04.586 iops : min= 5477, max= 6012, avg=5830.25, stdev=239.86, samples=4 00:29:04.586 write: IOPS=5828, BW=22.8MiB/s (23.9MB/s)(45.7MiB/2009msec); 0 zone resets 00:29:04.586 slat (usec): min=2, max=108, avg= 2.77, stdev= 
1.74 00:29:04.586 clat (usec): min=2146, max=17266, avg=9675.30, stdev=902.77 00:29:04.586 lat (usec): min=2153, max=17269, avg=9678.08, stdev=902.72 00:29:04.586 clat percentiles (usec): 00:29:04.586 | 1.00th=[ 7635], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 8979], 00:29:04.586 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:29:04.586 | 70.00th=[10159], 80.00th=[10290], 90.00th=[10683], 95.00th=[11076], 00:29:04.586 | 99.00th=[11600], 99.50th=[11994], 99.90th=[16188], 99.95th=[17171], 00:29:04.586 | 99.99th=[17171] 00:29:04.586 bw ( KiB/s): min=22954, max=23488, per=99.89%, avg=23290.50, stdev=238.11, samples=4 00:29:04.586 iops : min= 5738, max= 5872, avg=5822.50, stdev=59.76, samples=4 00:29:04.586 lat (msec) : 4=0.05%, 10=33.90%, 20=66.05% 00:29:04.586 cpu : usr=58.57%, sys=38.65%, ctx=126, majf=0, minf=40 00:29:04.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:04.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:04.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:04.586 issued rwts: total=11734,11710,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:04.586 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:04.586 00:29:04.586 Run status group 0 (all jobs): 00:29:04.586 READ: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=45.8MiB (48.1MB), run=2009-2009msec 00:29:04.586 WRITE: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=45.7MiB (48.0MB), run=2009-2009msec 00:29:04.586 23:35:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:04.586 23:35:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:29:04.586 23:35:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:08.769 23:35:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:08.769 23:35:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:29:12.054 23:35:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:12.054 23:35:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:13.956 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:13.956 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:29:13.956 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:29:13.956 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:13.956 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:29:13.956 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:13.956 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:29:13.956 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:13.956 23:35:11 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:13.956 rmmod nvme_tcp 00:29:13.956 rmmod nvme_fabrics 00:29:13.956 rmmod nvme_keyring 00:29:13.956 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:13.956 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:29:13.956 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:29:13.956 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1488451 ']' 00:29:13.956 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1488451 00:29:13.956 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1488451 ']' 00:29:13.956 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 1488451 00:29:13.956 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:29:13.956 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:13.956 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1488451 00:29:13.956 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:13.956 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:13.956 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1488451' 00:29:13.956 killing process with pid 1488451 00:29:13.956 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1488451 00:29:13.956 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1488451 00:29:14.214 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:14.214 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:14.214 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:14.214 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:14.214 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:14.214 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.214 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.214 23:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.118 23:35:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:16.118 00:29:16.118 real 0m37.005s 00:29:16.118 user 2m21.826s 00:29:16.118 sys 0m7.081s 00:29:16.118 23:35:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:16.118 23:35:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.118 ************************************ 00:29:16.118 END TEST nvmf_fio_host 00:29:16.118 ************************************ 00:29:16.118 23:35:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:16.118 23:35:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 
3 -le 1 ']' 00:29:16.118 23:35:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:16.118 23:35:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.118 ************************************ 00:29:16.118 START TEST nvmf_failover 00:29:16.118 ************************************ 00:29:16.118 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:16.376 * Looking for test storage... 00:29:16.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:16.376 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:16.376 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:29:16.376 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:16.376 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:16.376 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:16.376 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:16.376 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:16.376 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:16.376 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:16.376 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:16.377 23:35:13 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:29:16.377 23:35:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:18.279 
23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:18.279 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:18.279 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:18.279 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:18.280 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:18.280 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
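The nvmf_tcp_init trace that follows wires the two detected ports together through a network namespace; condensed into plain commands (interface names as detected on this host):
# Target port moves into a namespace; the initiator port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity checks in both directions, as in the pings below:
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1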
00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:18.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:18.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:29:18.280 00:29:18.280 --- 10.0.0.2 ping statistics --- 00:29:18.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.280 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:18.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:18.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:29:18.280 00:29:18.280 --- 10.0.0.1 ping statistics --- 00:29:18.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.280 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:18.280 23:35:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:18.540 23:35:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:18.540 23:35:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:18.540 23:35:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:18.540 23:35:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:18.540 23:35:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1494521 00:29:18.540 23:35:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:18.540 23:35:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1494521 00:29:18.540 23:35:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1494521 ']' 00:29:18.540 23:35:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.540 23:35:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:18.540 23:35:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:18.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:18.540 23:35:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:18.540 23:35:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:18.540 [2024-07-25 23:35:16.058912] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:29:18.540 [2024-07-25 23:35:16.059002] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:18.540 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.540 [2024-07-25 23:35:16.097315] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
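Once the target is up, the trace below provisions it for the failover run: one malloc-backed namespace exported through three TCP listeners, with bdevperf attaching over the first. A condensed sketch (rpc.py again abbreviates the full scripts/rpc.py path; the loop stands in for the script's three separate add_listener calls):
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done
# bdevperf runs against its own RPC socket and gets its first path via:
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1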
00:29:18.540 [2024-07-25 23:35:16.127910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:18.540 [2024-07-25 23:35:16.226152] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:18.540 [2024-07-25 23:35:16.226214] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:18.540 [2024-07-25 23:35:16.226230] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:18.540 [2024-07-25 23:35:16.226244] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:18.540 [2024-07-25 23:35:16.226257] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:18.540 [2024-07-25 23:35:16.226314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:18.540 [2024-07-25 23:35:16.226442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:18.540 [2024-07-25 23:35:16.226445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.798 23:35:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:18.798 23:35:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:18.798 23:35:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:18.798 23:35:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:18.798 23:35:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:18.798 23:35:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:18.798 23:35:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:19.056 [2024-07-25 23:35:16.651639] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.056 23:35:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:19.314 Malloc0 00:29:19.314 23:35:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:19.572 23:35:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:19.830 23:35:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:20.087 [2024-07-25 23:35:17.777264] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:20.087 23:35:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:20.345 [2024-07-25 23:35:18.062170] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:20.604 23:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:20.604 [2024-07-25 23:35:18.306964] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:20.604 23:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1494809 00:29:20.604 23:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:20.604 23:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1494809 /var/tmp/bdevperf.sock 00:29:20.604 23:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:20.604 23:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1494809 ']' 00:29:20.604 23:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:20.604 23:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:20.863 23:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:20.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:20.863 23:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:20.863 23:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:21.120 23:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:21.120 23:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:21.120 23:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:21.690 NVMe0n1 00:29:21.690 23:35:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:21.949 00:29:21.949 23:35:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1494941 00:29:21.949 23:35:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:29:21.950 23:35:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:22.883 23:35:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:23.172 [2024-07-25 23:35:20.721883] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf93480 is same with the state(5) to be set 00:29:23.173 [2024-07-25 23:35:20.721950] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf93480 
00:29:23.173 23:35:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:29:26.460 23:35:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:26.460 00
00:29:26.460 23:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:29:26.718 [2024-07-25 23:35:24.331668] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94250 is same with the state(5) to be set
00:29:26.719 [... identical recv-state message for tqpair=0xf94250 repeated through 23:35:24.332647; duplicates elided ...]
00:29:26.719 23:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
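Note: the pattern in this trace is one failover leg per ~3 s window: attach a path on the port that is about to take over, remove the listener the I/O is currently riding on, and let bdev_nvme reconnect; each removal is immediately followed by a burst of nvmf_tcp_qpair_set_recv_state messages, apparently from the affected connections shutting down. A sketch of a single leg, using the ports from this run (4422 taking over from 4421); the target-side rpc.py call uses its default /var/tmp/spdk.sock socket:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NQN=nqn.2016-06.io.spdk:cnode1
  # give the initiator a path on the surviving port first ...
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN
  # ... then pull the active listener on the target side
  $SPDK/scripts/rpc.py nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421
  sleep 3   # leave time for bdev_nvme to reconnect before the next leg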
00:29:30.009 23:35:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:30.009 [2024-07-25 23:35:27.635549] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:30.009 23:35:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:29:30.943 23:35:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:29:31.202 [2024-07-25 23:35:28.888829] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94ff0 is same with the state(5) to be set
00:29:31.203 [... identical recv-state message for tqpair=0xf94ff0 repeated through 23:35:28.889854; duplicates elided ...]
00:29:31.203 23:35:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1494941
00:29:37.779 0
00:29:37.779 23:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1494809
00:29:37.779 23:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1494809 ']'
00:29:37.779 23:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1494809
00:29:37.779 23:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:29:37.779 23:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:37.779 23:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1494809
00:29:37.779 23:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:29:37.779 23:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:29:37.780 23:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1494809'
killing process with pid 1494809
00:29:37.780 23:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1494809
00:29:37.780 23:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1494809
00:29:37.780 23:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-07-25 23:35:18.371975] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
[2024-07-25 23:35:18.372087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494809 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-25 23:35:18.409902] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:29:37.780 [2024-07-25 23:35:18.439710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:37.780 [2024-07-25 23:35:18.531439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:29:37.780 Running I/O for 15 seconds...
00:29:37.780 [2024-07-25 23:35:20.723846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.780 [2024-07-25 23:35:20.723888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ command / 'ABORTED - SQ DELETION (00/08)' completion pair repeats for every outstanding read, lba:77088 through lba:77384 in steps of 8 (cid varies); duplicates elided ...]
00:29:37.781 [2024-07-25 23:35:20.725100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:37.781 [2024-07-25 23:35:20.725114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE command / 'ABORTED - SQ DELETION (00/08)' completion pair repeats for every outstanding write, lba:77416 through lba:77784 in steps of 8 (cid varies); duplicates elided ...]
00:29:37.782 [2024-07-25 23:35:20.726565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:37.782 [2024-07-25 23:35:20.726582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77792 len:8 PRP1 0x0 PRP2 0x0
00:29:37.782 [2024-07-25 23:35:20.726595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:37.782 [2024-07-25 23:35:20.726613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... the same 'aborting queued i/o' / manual completion sequence repeats for the queued writes, lba:77800 through lba:77896 in steps of 8; duplicates elided ...]
00:29:37.783 [2024-07-25 23:35:20.727304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:37.783 [2024-07-25 23:35:20.727315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request:
*NOTICE*: Command completed manually: 00:29:37.783 [2024-07-25 23:35:20.727327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77904 len:8 PRP1 0x0 PRP2 0x0 00:29:37.783 [2024-07-25 23:35:20.727340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.783 [2024-07-25 23:35:20.727364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.783 [2024-07-25 23:35:20.727376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.783 [2024-07-25 23:35:20.727387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77912 len:8 PRP1 0x0 PRP2 0x0 00:29:37.783 [2024-07-25 23:35:20.727400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.783 [2024-07-25 23:35:20.727422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.783 [2024-07-25 23:35:20.727433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.783 [2024-07-25 23:35:20.727445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77920 len:8 PRP1 0x0 PRP2 0x0 00:29:37.783 [2024-07-25 23:35:20.727457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.783 [2024-07-25 23:35:20.727471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.783 [2024-07-25 23:35:20.727482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.783 [2024-07-25 23:35:20.727493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77928 len:8 PRP1 0x0 PRP2 0x0 00:29:37.783 [2024-07-25 23:35:20.727506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.783 [2024-07-25 23:35:20.727536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.783 [2024-07-25 23:35:20.727547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.783 [2024-07-25 23:35:20.727559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77936 len:8 PRP1 0x0 PRP2 0x0 00:29:37.783 [2024-07-25 23:35:20.727572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.783 [2024-07-25 23:35:20.727586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.783 [2024-07-25 23:35:20.727598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.783 [2024-07-25 23:35:20.727609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77944 len:8 PRP1 0x0 PRP2 0x0 00:29:37.783 [2024-07-25 23:35:20.727622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.783 [2024-07-25 23:35:20.727637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.783 [2024-07-25 23:35:20.727648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.783 
[2024-07-25 23:35:20.727659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77952 len:8 PRP1 0x0 PRP2 0x0 00:29:37.783 [2024-07-25 23:35:20.727672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.783 [2024-07-25 23:35:20.727685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.783 [2024-07-25 23:35:20.727696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.783 [2024-07-25 23:35:20.727708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77960 len:8 PRP1 0x0 PRP2 0x0 00:29:37.783 [2024-07-25 23:35:20.727720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.783 [2024-07-25 23:35:20.727734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.784 [2024-07-25 23:35:20.727745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.784 [2024-07-25 23:35:20.727757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77968 len:8 PRP1 0x0 PRP2 0x0 00:29:37.784 [2024-07-25 23:35:20.727773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.784 [2024-07-25 23:35:20.727786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.784 [2024-07-25 23:35:20.727797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.784 [2024-07-25 23:35:20.727809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77976 len:8 PRP1 0x0 PRP2 0x0 00:29:37.784 [2024-07-25 23:35:20.727822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.784 [2024-07-25 23:35:20.727835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.784 [2024-07-25 23:35:20.727846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.784 [2024-07-25 23:35:20.727858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77984 len:8 PRP1 0x0 PRP2 0x0 00:29:37.784 [2024-07-25 23:35:20.727871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.784 [2024-07-25 23:35:20.727884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.784 [2024-07-25 23:35:20.727895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.784 [2024-07-25 23:35:20.727906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77992 len:8 PRP1 0x0 PRP2 0x0 00:29:37.784 [2024-07-25 23:35:20.727918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.784 [2024-07-25 23:35:20.727932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.784 [2024-07-25 23:35:20.727943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.784 [2024-07-25 23:35:20.727954] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78000 len:8 PRP1 0x0 PRP2 0x0 00:29:37.784 [2024-07-25 23:35:20.727967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.784 [2024-07-25 23:35:20.727980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.784 [2024-07-25 23:35:20.727991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.784 [2024-07-25 23:35:20.728002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78008 len:8 PRP1 0x0 PRP2 0x0 00:29:37.784 [2024-07-25 23:35:20.728014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.784 [2024-07-25 23:35:20.728027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.784 [2024-07-25 23:35:20.728039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.784 [2024-07-25 23:35:20.728050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78016 len:8 PRP1 0x0 PRP2 0x0 00:29:37.784 [2024-07-25 23:35:20.728069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.784 [2024-07-25 23:35:20.728084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.784 [2024-07-25 23:35:20.728095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.784 [2024-07-25 23:35:20.728107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78024 len:8 PRP1 0x0 PRP2 0x0 00:29:37.784 [2024-07-25 23:35:20.728119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.784 [2024-07-25 23:35:20.728132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.784 [2024-07-25 23:35:20.728143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.784 [2024-07-25 23:35:20.728158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78032 len:8 PRP1 0x0 PRP2 0x0 00:29:37.784 [2024-07-25 23:35:20.728171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.784 [2024-07-25 23:35:20.728184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.784 [2024-07-25 23:35:20.728195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.784 [2024-07-25 23:35:20.728207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78040 len:8 PRP1 0x0 PRP2 0x0 00:29:37.784 [2024-07-25 23:35:20.728219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.784 [2024-07-25 23:35:20.728233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.784 [2024-07-25 23:35:20.728244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.784 [2024-07-25 23:35:20.728255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:78048 len:8 PRP1 0x0 PRP2 0x0 00:29:37.784 [2024-07-25 23:35:20.728267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.784 [2024-07-25 23:35:20.728280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.784 [2024-07-25 23:35:20.728291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.784 [2024-07-25 23:35:20.728302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78056 len:8 PRP1 0x0 PRP2 0x0 00:29:37.784 [2024-07-25 23:35:20.728315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.784 [2024-07-25 23:35:20.728327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.784 [2024-07-25 23:35:20.728338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.784 [2024-07-25 23:35:20.728350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78064 len:8 PRP1 0x0 PRP2 0x0 00:29:37.784 [2024-07-25 23:35:20.728362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.784 [2024-07-25 23:35:20.728375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.784 [2024-07-25 23:35:20.728386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.784 [2024-07-25 23:35:20.728398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78072 len:8 PRP1 0x0 PRP2 0x0 00:29:37.784 [2024-07-25 23:35:20.728410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.784 [2024-07-25 23:35:20.728423] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.784 [2024-07-25 23:35:20.728434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.784 [2024-07-25 23:35:20.728445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78080 len:8 PRP1 0x0 PRP2 0x0 00:29:37.784 [2024-07-25 23:35:20.728458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.784 [2024-07-25 23:35:20.728471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.784 [2024-07-25 23:35:20.728482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.784 [2024-07-25 23:35:20.728493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78088 len:8 PRP1 0x0 PRP2 0x0 00:29:37.784 [2024-07-25 23:35:20.728506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.784 [2024-07-25 23:35:20.728519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.784 [2024-07-25 23:35:20.728533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.784 [2024-07-25 23:35:20.728545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78096 len:8 PRP1 0x0 PRP2 0x0 
00:29:37.784 [2024-07-25 23:35:20.728558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.784 [2024-07-25 23:35:20.728571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.784 [2024-07-25 23:35:20.728582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.784 [2024-07-25 23:35:20.728593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77392 len:8 PRP1 0x0 PRP2 0x0 00:29:37.784 [2024-07-25 23:35:20.728606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.784 [2024-07-25 23:35:20.728619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.784 [2024-07-25 23:35:20.728630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.784 [2024-07-25 23:35:20.728641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77400 len:8 PRP1 0x0 PRP2 0x0 00:29:37.784 [2024-07-25 23:35:20.728654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.785 [2024-07-25 23:35:20.728714] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2004cb0 was disconnected and freed. reset controller. 00:29:37.785 [2024-07-25 23:35:20.728731] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:37.785 [2024-07-25 23:35:20.728768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.785 [2024-07-25 23:35:20.728787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.785 [2024-07-25 23:35:20.728802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.785 [2024-07-25 23:35:20.728816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.785 [2024-07-25 23:35:20.728830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.785 [2024-07-25 23:35:20.728843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.785 [2024-07-25 23:35:20.728857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.785 [2024-07-25 23:35:20.728870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.785 [2024-07-25 23:35:20.728892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:37.785 [2024-07-25 23:35:20.728957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2011850 (9): Bad file descriptor 00:29:37.785 [2024-07-25 23:35:20.732247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.785 [2024-07-25 23:35:20.841646] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:37.785 [2024-07-25 23:35:24.333788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:84992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.785 [2024-07-25 23:35:24.333829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.785 [2024-07-25 23:35:24.333855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:85000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.785 [2024-07-25 23:35:24.333876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.785 [2024-07-25 23:35:24.333893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:85008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.785 [2024-07-25 23:35:24.333906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.785 [2024-07-25 23:35:24.333921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:85016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.785 [2024-07-25 23:35:24.333934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.785 [2024-07-25 23:35:24.333949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:85024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.785 [2024-07-25 23:35:24.333962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.785 [2024-07-25 23:35:24.333977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:85032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.785 [2024-07-25 23:35:24.333990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.785 [2024-07-25 23:35:24.334004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.785 [2024-07-25 23:35:24.334017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.785 [2024-07-25 23:35:24.334032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.785 [2024-07-25 23:35:24.334045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.785 [2024-07-25 23:35:24.334082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:85056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.785 [2024-07-25 23:35:24.334100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.785 [2024-07-25 23:35:24.334116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:85064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.785 [2024-07-25 23:35:24.334129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.785 [2024-07-25 23:35:24.334145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:85072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.785 [2024-07-25 23:35:24.334158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.785 [2024-07-25 23:35:24.334173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:85080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.785 [2024-07-25 23:35:24.334187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.785 [2024-07-25 23:35:24.334202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.785 [2024-07-25 23:35:24.334215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.785 [2024-07-25 23:35:24.334231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.785 [2024-07-25 23:35:24.334244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.785 [2024-07-25 23:35:24.334263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.785 [2024-07-25 23:35:24.334277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.785 [2024-07-25 23:35:24.334292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:85112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.785 [2024-07-25 23:35:24.334306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.785 [2024-07-25 23:35:24.334321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:85120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.785 [2024-07-25 23:35:24.334335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.785 [2024-07-25 23:35:24.334351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.785 [2024-07-25 23:35:24.334364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.785 [2024-07-25 23:35:24.334396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:85136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.786 [2024-07-25 23:35:24.334409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.334424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:85144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.786 [2024-07-25 23:35:24.334437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.334451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:85152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.786 [2024-07-25 23:35:24.334464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.334479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:85160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.786 [2024-07-25 23:35:24.334492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.334506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:85168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.786 [2024-07-25 23:35:24.334519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.334534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:85176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.786 [2024-07-25 23:35:24.334547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.334562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.786 [2024-07-25 23:35:24.334575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.334589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:85192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.786 [2024-07-25 23:35:24.334602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.334617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:85200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.786 [2024-07-25 23:35:24.334648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.334665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.786 [2024-07-25 23:35:24.334678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.334693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.786 [2024-07-25 23:35:24.334707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 
23:35:24.334722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.786 [2024-07-25 23:35:24.334735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.334750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:85368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.786 [2024-07-25 23:35:24.334764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.334779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:85376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.786 [2024-07-25 23:35:24.334793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.334808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.786 [2024-07-25 23:35:24.334821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.334837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.786 [2024-07-25 23:35:24.334851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.334866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:85400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.786 [2024-07-25 23:35:24.334880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.334895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.786 [2024-07-25 23:35:24.334908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.334923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.786 [2024-07-25 23:35:24.334937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.334952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.786 [2024-07-25 23:35:24.334965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.334980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:85432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.786 [2024-07-25 23:35:24.334993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.335008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.786 [2024-07-25 23:35:24.335025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.335041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:85448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.786 [2024-07-25 23:35:24.335054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.335077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.786 [2024-07-25 23:35:24.335092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.335107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.786 [2024-07-25 23:35:24.335121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.335136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.786 [2024-07-25 23:35:24.335149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.335165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:85480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.786 [2024-07-25 23:35:24.335178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.335193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.786 [2024-07-25 23:35:24.335207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.335222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:85496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.786 [2024-07-25 23:35:24.335235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.335250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.786 [2024-07-25 23:35:24.335263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.335278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:85512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.786 [2024-07-25 23:35:24.335292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.335307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:56 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.786 [2024-07-25 23:35:24.335320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.786 [2024-07-25 23:35:24.335335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.786 [2024-07-25 23:35:24.335349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.335364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.335377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.335396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.335410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.335425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.335438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.335453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.335467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.335482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.335496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.335511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:85576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.335524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.335539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.335552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.335567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.335581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.335596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:85600 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.335609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.335624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.335638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.335653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:85616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.335666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.335681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.335694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.335710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:85632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.335723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.335738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.335755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.335771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:85648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.335785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.335800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.335813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.335828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.335842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.335857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.335871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.335886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:85680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 
23:35:24.335899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.335914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.335927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.335942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:85696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.335955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.335970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.335984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.335999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:85712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.336012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.336027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:85720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.336041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.336056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:85728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.336078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.336094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:85736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.336108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.336123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.336139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.336155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.336169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.787 [2024-07-25 23:35:24.336184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:85760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.787 [2024-07-25 23:35:24.336198] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:37.787 [2024-07-25 23:35:24.336213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:85768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:37.787 [2024-07-25 23:35:24.336227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical print_command/ABORTED - SQ DELETION pairs omitted for WRITE lba:85776-85848 (SGL DATA BLOCK) and READ lba:85208 (SGL TRANSPORT DATA BLOCK) ...]
00:29:37.788 [2024-07-25 23:35:24.336575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:37.788 [2024-07-25 23:35:24.336591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85856 len:8 PRP1 0x0 PRP2 0x0
00:29:37.788 [2024-07-25 23:35:24.336605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:37.788 [2024-07-25 23:35:24.336653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:37.788 [2024-07-25 23:35:24.336674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical ASYNC EVENT REQUEST abort pairs omitted for cid:2, cid:1, cid:0 ...]
00:29:37.788 [2024-07-25 23:35:24.336770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011850 is same with the state(5) to be set
00:29:37.788 [2024-07-25 23:35:24.336991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:37.788 [2024-07-25 23:35:24.337012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:37.788 [2024-07-25 23:35:24.337025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85864 len:8 PRP1 0x0 PRP2 0x0
00:29:37.788 [2024-07-25 23:35:24.337038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same aborting queued i/o / Command completed manually / ABORTED - SQ DELETION sequence repeats for each remaining queued request, all len:8: WRITE lba:85872-86008, READ lba:85216-85336, READ lba:84992-85200, WRITE lba:85344-85768 ...]
00:29:37.794 [2024-07-25 23:35:24.350305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:37.794 [2024-07-25 23:35:24.350315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:37.794 [2024-07-25 23:35:24.350327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85776 len:8 PRP1 0x0 PRP2 0x0
00:29:37.794 [2024-07-25 23:35:24.350339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:29:37.794 [2024-07-25 23:35:24.350352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.794 [2024-07-25 23:35:24.350363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.794 [2024-07-25 23:35:24.350374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85784 len:8 PRP1 0x0 PRP2 0x0 00:29:37.794 [2024-07-25 23:35:24.350387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.794 [2024-07-25 23:35:24.350400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.794 [2024-07-25 23:35:24.350410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.794 [2024-07-25 23:35:24.350422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85792 len:8 PRP1 0x0 PRP2 0x0 00:29:37.794 [2024-07-25 23:35:24.350438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.794 [2024-07-25 23:35:24.350452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.794 [2024-07-25 23:35:24.350463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.794 [2024-07-25 23:35:24.350474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85208 len:8 PRP1 0x0 PRP2 0x0 00:29:37.794 [2024-07-25 23:35:24.350487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.794 [2024-07-25 23:35:24.350500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.794 [2024-07-25 23:35:24.350511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.794 [2024-07-25 23:35:24.350522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85800 len:8 PRP1 0x0 PRP2 0x0 00:29:37.794 [2024-07-25 23:35:24.350534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.794 [2024-07-25 23:35:24.350547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.794 [2024-07-25 23:35:24.350558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.794 [2024-07-25 23:35:24.350569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85808 len:8 PRP1 0x0 PRP2 0x0 00:29:37.794 [2024-07-25 23:35:24.350582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.794 [2024-07-25 23:35:24.350594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.794 [2024-07-25 23:35:24.350605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.794 [2024-07-25 23:35:24.350616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85816 len:8 PRP1 0x0 PRP2 0x0 00:29:37.795 [2024-07-25 23:35:24.350629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.795 [2024-07-25 23:35:24.350642] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.795 [2024-07-25 23:35:24.350653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.795 [2024-07-25 23:35:24.350664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85824 len:8 PRP1 0x0 PRP2 0x0 00:29:37.795 [2024-07-25 23:35:24.350678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.795 [2024-07-25 23:35:24.350691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.795 [2024-07-25 23:35:24.350702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.795 [2024-07-25 23:35:24.350713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85832 len:8 PRP1 0x0 PRP2 0x0 00:29:37.795 [2024-07-25 23:35:24.350726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.795 [2024-07-25 23:35:24.350739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.795 [2024-07-25 23:35:24.350750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.795 [2024-07-25 23:35:24.350761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85840 len:8 PRP1 0x0 PRP2 0x0 00:29:37.795 [2024-07-25 23:35:24.350774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.795 [2024-07-25 23:35:24.350787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.795 [2024-07-25 23:35:24.350798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.795 [2024-07-25 23:35:24.350813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85848 len:8 PRP1 0x0 PRP2 0x0 00:29:37.795 [2024-07-25 23:35:24.350826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.795 [2024-07-25 23:35:24.350840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.795 [2024-07-25 23:35:24.350850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.795 [2024-07-25 23:35:24.350862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85856 len:8 PRP1 0x0 PRP2 0x0 00:29:37.795 [2024-07-25 23:35:24.350874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.795 [2024-07-25 23:35:24.350936] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2035670 was disconnected and freed. reset controller. 00:29:37.795 [2024-07-25 23:35:24.350954] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:29:37.795 [2024-07-25 23:35:24.350969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:37.795 [2024-07-25 23:35:24.351025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2011850 (9): Bad file descriptor
00:29:37.795 [2024-07-25 23:35:24.354293] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.795 [2024-07-25 23:35:24.429020] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:37.795 [2024-07-25 23:35:28.890697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.795 [2024-07-25 23:35:28.890735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:37.796 [2024-07-25 23:35:28.891535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.796 [2024-07-25 23:35:28.891548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:37.796 [2024-07-25 23:35:28.891563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:37.796 [2024-07-25 23:35:28.891576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:37.798 [2024-07-25 23:35:28.893249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.798 [2024-07-25 23:35:28.893262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:37.798 [2024-07-25 23:35:28.893426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:37.798 [2024-07-25 23:35:28.893439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:37.798 [2024-07-25 23:35:28.893454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:37.798 [2024-07-25 23:35:28.893468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:37.798 [2024-07-25 23:35:28.893497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:37.798 [2024-07-25 23:35:28.893514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21384 len:8 PRP1 0x0 PRP2 0x0
00:29:37.798 [2024-07-25 23:35:28.893527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:37.798 [2024-07-25 23:35:28.893588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:37.798 [2024-07-25 23:35:28.893610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:37.798 [2024-07-25 23:35:28.893625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:37.798 [2024-07-25 23:35:28.893639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:37.798 [2024-07-25 23:35:28.893653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:37.798 [2024-07-25 23:35:28.893671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:37.798 [2024-07-25 23:35:28.893686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:37.798 [2024-07-25 23:35:28.893700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:37.798 [2024-07-25 23:35:28.893714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011850 is same with the state(5) to be set
00:29:37.798 [2024-07-25 23:35:28.893934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:37.798 [2024-07-25 23:35:28.893970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:37.798 [2024-07-25 23:35:28.893983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21392 len:8 PRP1 0x0 PRP2 0x0
00:29:37.798 [2024-07-25 23:35:28.893997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:37.799 [2024-07-25 23:35:28.894858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:37.799 [2024-07-25 23:35:28.894869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:37.799 [2024-07-25 23:35:28.894881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:8 PRP1 0x0 PRP2 0x0
00:29:37.799 [2024-07-25 23:35:28.894894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:37.799 [2024-07-25 23:35:28.894907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:37.799 [2024-07-25 23:35:28.894918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:37.799 [2024-07-25 23:35:28.894930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20744 len:8 PRP1 0x0 PRP2 0x0
00:29:37.799 [2024-07-25 23:35:28.894943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:37.800 [2024-07-25 23:35:28.895549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:37.800 [2024-07-25 23:35:28.895560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:37.800 [2024-07-25 23:35:28.895576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20840 len:8 PRP1 0x0 PRP2 0x0
00:29:37.800 [2024-07-25 23:35:28.895589] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.800 [2024-07-25 23:35:28.895602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.800 [2024-07-25 23:35:28.895613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.800 [2024-07-25 23:35:28.895624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20848 len:8 PRP1 0x0 PRP2 0x0 00:29:37.800 [2024-07-25 23:35:28.895637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.800 [2024-07-25 23:35:28.895655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.800 [2024-07-25 23:35:28.895666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.800 [2024-07-25 23:35:28.895677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20856 len:8 PRP1 0x0 PRP2 0x0 00:29:37.800 [2024-07-25 23:35:28.895690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.800 [2024-07-25 23:35:28.895703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.800 [2024-07-25 23:35:28.895713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.800 [2024-07-25 23:35:28.895724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20864 len:8 PRP1 0x0 PRP2 0x0 00:29:37.800 [2024-07-25 23:35:28.895737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.800 [2024-07-25 23:35:28.895750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.800 [2024-07-25 23:35:28.895761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.800 [2024-07-25 23:35:28.895772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20520 len:8 PRP1 0x0 PRP2 0x0 00:29:37.800 [2024-07-25 23:35:28.895784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.800 [2024-07-25 23:35:28.895798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.800 [2024-07-25 23:35:28.895809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.800 [2024-07-25 23:35:28.895820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20528 len:8 PRP1 0x0 PRP2 0x0 00:29:37.800 [2024-07-25 23:35:28.895832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.800 [2024-07-25 23:35:28.895846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.800 [2024-07-25 23:35:28.895856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.800 [2024-07-25 23:35:28.895873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20536 len:8 PRP1 0x0 PRP2 0x0 00:29:37.800 [2024-07-25 23:35:28.895886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.800 [2024-07-25 23:35:28.895899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.800 [2024-07-25 23:35:28.895911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.800 [2024-07-25 23:35:28.895922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20544 len:8 PRP1 0x0 PRP2 0x0 00:29:37.800 [2024-07-25 23:35:28.895935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.800 [2024-07-25 23:35:28.895948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.800 [2024-07-25 23:35:28.895959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.800 [2024-07-25 23:35:28.895975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20552 len:8 PRP1 0x0 PRP2 0x0 00:29:37.800 [2024-07-25 23:35:28.895987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.800 [2024-07-25 23:35:28.896000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.800 [2024-07-25 23:35:28.896011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.800 [2024-07-25 23:35:28.896022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20560 len:8 PRP1 0x0 PRP2 0x0 00:29:37.800 [2024-07-25 23:35:28.896035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.800 [2024-07-25 23:35:28.896053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.800 [2024-07-25 23:35:28.896071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.800 [2024-07-25 23:35:28.896083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20568 len:8 PRP1 0x0 PRP2 0x0 00:29:37.800 [2024-07-25 23:35:28.896096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.800 [2024-07-25 23:35:28.896109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.800 [2024-07-25 23:35:28.896120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.800 [2024-07-25 23:35:28.896131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20576 len:8 PRP1 0x0 PRP2 0x0 00:29:37.800 [2024-07-25 23:35:28.896143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.800 [2024-07-25 23:35:28.896156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.800 [2024-07-25 23:35:28.896167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.800 [2024-07-25 23:35:28.896178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20584 len:8 PRP1 0x0 PRP2 0x0 00:29:37.800 [2024-07-25 23:35:28.896190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:37.800 [2024-07-25 23:35:28.896203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.800 [2024-07-25 23:35:28.896214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.800 [2024-07-25 23:35:28.896224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20592 len:8 PRP1 0x0 PRP2 0x0 00:29:37.800 [2024-07-25 23:35:28.896237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.800 [2024-07-25 23:35:28.896249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.800 [2024-07-25 23:35:28.896263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.800 [2024-07-25 23:35:28.896275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20600 len:8 PRP1 0x0 PRP2 0x0 00:29:37.800 [2024-07-25 23:35:28.896288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.800 [2024-07-25 23:35:28.896300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.800 [2024-07-25 23:35:28.896311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.800 [2024-07-25 23:35:28.896322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 len:8 PRP1 0x0 PRP2 0x0 00:29:37.800 [2024-07-25 23:35:28.896334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.800 [2024-07-25 23:35:28.896347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.800 [2024-07-25 23:35:28.896358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.801 [2024-07-25 23:35:28.896370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20616 len:8 PRP1 0x0 PRP2 0x0 00:29:37.801 [2024-07-25 23:35:28.896383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.801 [2024-07-25 23:35:28.896397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.801 [2024-07-25 23:35:28.896407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.801 [2024-07-25 23:35:28.896418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20624 len:8 PRP1 0x0 PRP2 0x0 00:29:37.801 [2024-07-25 23:35:28.896431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.801 [2024-07-25 23:35:28.896445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.801 [2024-07-25 23:35:28.896456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.801 [2024-07-25 23:35:28.896468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20632 len:8 PRP1 0x0 PRP2 0x0 00:29:37.801 [2024-07-25 23:35:28.896480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.801 [2024-07-25 23:35:28.896493] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.801 [2024-07-25 23:35:28.896504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.801 [2024-07-25 23:35:28.896516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20640 len:8 PRP1 0x0 PRP2 0x0 00:29:37.801 [2024-07-25 23:35:28.896529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.801 [2024-07-25 23:35:28.896542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.801 [2024-07-25 23:35:28.896552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.801 [2024-07-25 23:35:28.896563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20648 len:8 PRP1 0x0 PRP2 0x0 00:29:37.801 [2024-07-25 23:35:28.896576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.801 [2024-07-25 23:35:28.896589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.801 [2024-07-25 23:35:28.896601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.801 [2024-07-25 23:35:28.896612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20656 len:8 PRP1 0x0 PRP2 0x0 00:29:37.801 [2024-07-25 23:35:28.896625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.801 [2024-07-25 23:35:28.896641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.801 [2024-07-25 23:35:28.896653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.801 [2024-07-25 23:35:28.896665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20664 len:8 PRP1 0x0 PRP2 0x0 00:29:37.801 [2024-07-25 23:35:28.896677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.801 [2024-07-25 23:35:28.896690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.801 [2024-07-25 23:35:28.896701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.801 [2024-07-25 23:35:28.896712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20672 len:8 PRP1 0x0 PRP2 0x0 00:29:37.801 [2024-07-25 23:35:28.896725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.801 [2024-07-25 23:35:28.896738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.801 [2024-07-25 23:35:28.896749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.801 [2024-07-25 23:35:28.896760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20680 len:8 PRP1 0x0 PRP2 0x0 00:29:37.801 [2024-07-25 23:35:28.896773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.801 [2024-07-25 23:35:28.896786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:29:37.801 [2024-07-25 23:35:28.896797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.801 [2024-07-25 23:35:28.896808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20688 len:8 PRP1 0x0 PRP2 0x0 00:29:37.801 [2024-07-25 23:35:28.896821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.801 [2024-07-25 23:35:28.896834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.801 [2024-07-25 23:35:28.896845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.801 [2024-07-25 23:35:28.896856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20696 len:8 PRP1 0x0 PRP2 0x0 00:29:37.801 [2024-07-25 23:35:28.896869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.801 [2024-07-25 23:35:28.896882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.801 [2024-07-25 23:35:28.896893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.801 [2024-07-25 23:35:28.896904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20704 len:8 PRP1 0x0 PRP2 0x0 00:29:37.801 [2024-07-25 23:35:28.896917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.801 [2024-07-25 23:35:28.896930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.801 [2024-07-25 23:35:28.896941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.801 [2024-07-25 23:35:28.896952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20712 len:8 PRP1 0x0 PRP2 0x0 00:29:37.801 [2024-07-25 23:35:28.896964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.801 [2024-07-25 23:35:28.896977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.801 [2024-07-25 23:35:28.896988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.801 [2024-07-25 23:35:28.896999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20720 len:8 PRP1 0x0 PRP2 0x0 00:29:37.801 [2024-07-25 23:35:28.897015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.801 [2024-07-25 23:35:28.897029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.801 [2024-07-25 23:35:28.897040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.801 [2024-07-25 23:35:28.897051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20728 len:8 PRP1 0x0 PRP2 0x0 00:29:37.801 [2024-07-25 23:35:28.897069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.801 [2024-07-25 23:35:28.897085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.801 [2024-07-25 23:35:28.897096] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.801 [2024-07-25 23:35:28.897108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20872 len:8 PRP1 0x0 PRP2 0x0 00:29:37.801 [2024-07-25 23:35:28.897121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.801 [2024-07-25 23:35:28.897134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.801 [2024-07-25 23:35:28.897145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.801 [2024-07-25 23:35:28.897157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20880 len:8 PRP1 0x0 PRP2 0x0 00:29:37.801 [2024-07-25 23:35:28.897170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.801 [2024-07-25 23:35:28.897183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.801 [2024-07-25 23:35:28.897194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.802 [2024-07-25 23:35:28.897205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20888 len:8 PRP1 0x0 PRP2 0x0 00:29:37.802 [2024-07-25 23:35:28.897218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.802 [2024-07-25 23:35:28.897231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.802 [2024-07-25 23:35:28.897242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.802 [2024-07-25 23:35:28.897254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:8 PRP1 0x0 PRP2 0x0 00:29:37.802 [2024-07-25 23:35:28.897266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.802 [2024-07-25 23:35:28.897280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.802 [2024-07-25 23:35:28.897291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.802 [2024-07-25 23:35:28.897302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20904 len:8 PRP1 0x0 PRP2 0x0 00:29:37.802 [2024-07-25 23:35:28.897315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.802 [2024-07-25 23:35:28.897328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.802 [2024-07-25 23:35:28.897339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.802 [2024-07-25 23:35:28.897350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20912 len:8 PRP1 0x0 PRP2 0x0 00:29:37.802 [2024-07-25 23:35:28.897362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.802 [2024-07-25 23:35:28.897376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.802 [2024-07-25 23:35:28.897387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:29:37.802 [2024-07-25 23:35:28.897401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20920 len:8 PRP1 0x0 PRP2 0x0 00:29:37.802 [2024-07-25 23:35:28.897414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.802 [2024-07-25 23:35:28.897427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.802 [2024-07-25 23:35:28.897439] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.802 [2024-07-25 23:35:28.897450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:8 PRP1 0x0 PRP2 0x0 00:29:37.802 [2024-07-25 23:35:28.897463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.802 [2024-07-25 23:35:28.897476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.802 [2024-07-25 23:35:28.897487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.802 [2024-07-25 23:35:28.897498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20936 len:8 PRP1 0x0 PRP2 0x0 00:29:37.802 [2024-07-25 23:35:28.897511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.802 [2024-07-25 23:35:28.897524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.802 [2024-07-25 23:35:28.897536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.802 [2024-07-25 23:35:28.897547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20944 len:8 PRP1 0x0 PRP2 0x0 00:29:37.802 [2024-07-25 23:35:28.897560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.802 [2024-07-25 23:35:28.897573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.802 [2024-07-25 23:35:28.897584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.802 [2024-07-25 23:35:28.897596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20952 len:8 PRP1 0x0 PRP2 0x0 00:29:37.802 [2024-07-25 23:35:28.897608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.802 [2024-07-25 23:35:28.897622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.802 [2024-07-25 23:35:28.897633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.802 [2024-07-25 23:35:28.897644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:8 PRP1 0x0 PRP2 0x0 00:29:37.802 [2024-07-25 23:35:28.897657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.802 [2024-07-25 23:35:28.897670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.802 [2024-07-25 23:35:28.897681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.802 [2024-07-25 
23:35:28.897692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20968 len:8 PRP1 0x0 PRP2 0x0 00:29:37.802 [2024-07-25 23:35:28.897705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.802 [2024-07-25 23:35:28.897719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.802 [2024-07-25 23:35:28.897729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.802 [2024-07-25 23:35:28.897756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20976 len:8 PRP1 0x0 PRP2 0x0 00:29:37.802 [2024-07-25 23:35:28.897769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.802 [2024-07-25 23:35:28.897785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.802 [2024-07-25 23:35:28.897796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.802 [2024-07-25 23:35:28.897808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20984 len:8 PRP1 0x0 PRP2 0x0 00:29:37.802 [2024-07-25 23:35:28.897821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.802 [2024-07-25 23:35:28.897834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.802 [2024-07-25 23:35:28.897844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.802 [2024-07-25 23:35:28.897856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:8 PRP1 0x0 PRP2 0x0 00:29:37.802 [2024-07-25 23:35:28.897868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.802 [2024-07-25 23:35:28.897881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.802 [2024-07-25 23:35:28.897892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.802 [2024-07-25 23:35:28.897904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21000 len:8 PRP1 0x0 PRP2 0x0 00:29:37.802 [2024-07-25 23:35:28.897916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.802 [2024-07-25 23:35:28.897929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.802 [2024-07-25 23:35:28.897941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.802 [2024-07-25 23:35:28.897957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21008 len:8 PRP1 0x0 PRP2 0x0 00:29:37.802 [2024-07-25 23:35:28.897970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.802 [2024-07-25 23:35:28.897983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.802 [2024-07-25 23:35:28.897994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.802 [2024-07-25 23:35:28.898005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21016 len:8 PRP1 0x0 PRP2 0x0 00:29:37.802 [2024-07-25 23:35:28.898018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.802 [2024-07-25 23:35:28.898031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.802 [2024-07-25 23:35:28.898041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.802 [2024-07-25 23:35:28.898053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:8 PRP1 0x0 PRP2 0x0 00:29:37.802 [2024-07-25 23:35:28.898076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.802 [2024-07-25 23:35:28.898090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.802 [2024-07-25 23:35:28.898101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.802 [2024-07-25 23:35:28.898112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21032 len:8 PRP1 0x0 PRP2 0x0 00:29:37.802 [2024-07-25 23:35:28.898125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.802 [2024-07-25 23:35:28.898138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.803 [2024-07-25 23:35:28.898149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.803 [2024-07-25 23:35:28.898165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21040 len:8 PRP1 0x0 PRP2 0x0 00:29:37.803 [2024-07-25 23:35:28.898181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.803 [2024-07-25 23:35:28.898194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.803 [2024-07-25 23:35:28.898205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.803 [2024-07-25 23:35:28.898216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21048 len:8 PRP1 0x0 PRP2 0x0 00:29:37.803 [2024-07-25 23:35:28.898229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.803 [2024-07-25 23:35:28.898242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.803 [2024-07-25 23:35:28.898252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.803 [2024-07-25 23:35:28.898263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:8 PRP1 0x0 PRP2 0x0 00:29:37.803 [2024-07-25 23:35:28.898276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.803 [2024-07-25 23:35:28.898289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.803 [2024-07-25 23:35:28.898300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.803 [2024-07-25 23:35:28.898311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:21064 len:8 PRP1 0x0 PRP2 0x0 00:29:37.803 [2024-07-25 23:35:28.898324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.803 [2024-07-25 23:35:28.898336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.803 [2024-07-25 23:35:28.898347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.803 [2024-07-25 23:35:28.898363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21072 len:8 PRP1 0x0 PRP2 0x0 00:29:37.803 [2024-07-25 23:35:28.898376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.803 [2024-07-25 23:35:28.898389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.803 [2024-07-25 23:35:28.898399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.803 [2024-07-25 23:35:28.898410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21080 len:8 PRP1 0x0 PRP2 0x0 00:29:37.803 [2024-07-25 23:35:28.898423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.803 [2024-07-25 23:35:28.898436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.803 [2024-07-25 23:35:28.898446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.803 [2024-07-25 23:35:28.898457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:8 PRP1 0x0 PRP2 0x0 00:29:37.803 [2024-07-25 23:35:28.898470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.803 [2024-07-25 23:35:28.898483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.803 [2024-07-25 23:35:28.898494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.803 [2024-07-25 23:35:28.898504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21096 len:8 PRP1 0x0 PRP2 0x0 00:29:37.803 [2024-07-25 23:35:28.898517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.803 [2024-07-25 23:35:28.898530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.803 [2024-07-25 23:35:28.898541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.803 [2024-07-25 23:35:28.898560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21104 len:8 PRP1 0x0 PRP2 0x0 00:29:37.803 [2024-07-25 23:35:28.898573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.803 [2024-07-25 23:35:28.898586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.803 [2024-07-25 23:35:28.898597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.803 [2024-07-25 23:35:28.898608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21112 len:8 PRP1 0x0 PRP2 0x0 
00:29:37.803 [2024-07-25 23:35:28.898621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.803 [2024-07-25 23:35:28.898634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.803 [2024-07-25 23:35:28.898644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.803 [2024-07-25 23:35:28.898656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:8 PRP1 0x0 PRP2 0x0 00:29:37.803 [2024-07-25 23:35:28.898668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.803 [2024-07-25 23:35:28.898681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.803 [2024-07-25 23:35:28.898692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.803 [2024-07-25 23:35:28.898703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21128 len:8 PRP1 0x0 PRP2 0x0 00:29:37.803 [2024-07-25 23:35:28.898716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.803 [2024-07-25 23:35:28.898729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.803 [2024-07-25 23:35:28.898740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.803 [2024-07-25 23:35:28.898758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21136 len:8 PRP1 0x0 PRP2 0x0 00:29:37.803 [2024-07-25 23:35:28.898770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.803 [2024-07-25 23:35:28.898783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.803 [2024-07-25 23:35:28.898794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.803 [2024-07-25 23:35:28.898805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21144 len:8 PRP1 0x0 PRP2 0x0 00:29:37.803 [2024-07-25 23:35:28.898818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.803 [2024-07-25 23:35:28.898831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.803 [2024-07-25 23:35:28.903864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.803 [2024-07-25 23:35:28.903893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:8 PRP1 0x0 PRP2 0x0 00:29:37.803 [2024-07-25 23:35:28.903909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.803 [2024-07-25 23:35:28.903924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.803 [2024-07-25 23:35:28.903936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.803 [2024-07-25 23:35:28.903948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21160 len:8 PRP1 0x0 PRP2 0x0 00:29:37.803 [2024-07-25 23:35:28.903961] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.803 [2024-07-25 23:35:28.903974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.803 [2024-07-25 23:35:28.903990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.803 [2024-07-25 23:35:28.904003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21168 len:8 PRP1 0x0 PRP2 0x0 00:29:37.803 [2024-07-25 23:35:28.904016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.803 [2024-07-25 23:35:28.904029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.803 [2024-07-25 23:35:28.904041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.803 [2024-07-25 23:35:28.904052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21176 len:8 PRP1 0x0 PRP2 0x0 00:29:37.803 [2024-07-25 23:35:28.904073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.803 [2024-07-25 23:35:28.904088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.803 [2024-07-25 23:35:28.904099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.803 [2024-07-25 23:35:28.904110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:8 PRP1 0x0 PRP2 0x0 00:29:37.803 [2024-07-25 23:35:28.904122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.803 [2024-07-25 23:35:28.904135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.803 [2024-07-25 23:35:28.904146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.804 [2024-07-25 23:35:28.904157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21192 len:8 PRP1 0x0 PRP2 0x0 00:29:37.804 [2024-07-25 23:35:28.904170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.804 [2024-07-25 23:35:28.904183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.804 [2024-07-25 23:35:28.904194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.804 [2024-07-25 23:35:28.904206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21200 len:8 PRP1 0x0 PRP2 0x0 00:29:37.804 [2024-07-25 23:35:28.904219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.804 [2024-07-25 23:35:28.904232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.804 [2024-07-25 23:35:28.904243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.804 [2024-07-25 23:35:28.904254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21208 len:8 PRP1 0x0 PRP2 0x0 00:29:37.804 [2024-07-25 23:35:28.904267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.804 [2024-07-25 23:35:28.904280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.804 [2024-07-25 23:35:28.904297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.804 [2024-07-25 23:35:28.904308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:8 PRP1 0x0 PRP2 0x0 00:29:37.804 [2024-07-25 23:35:28.904321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.804 [2024-07-25 23:35:28.904334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.804 [2024-07-25 23:35:28.904345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.804 [2024-07-25 23:35:28.904356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21224 len:8 PRP1 0x0 PRP2 0x0 00:29:37.804 [2024-07-25 23:35:28.904369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.804 [2024-07-25 23:35:28.904386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.804 [2024-07-25 23:35:28.904398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.804 [2024-07-25 23:35:28.904409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21232 len:8 PRP1 0x0 PRP2 0x0 00:29:37.804 [2024-07-25 23:35:28.904422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.804 [2024-07-25 23:35:28.904435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.804 [2024-07-25 23:35:28.904446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.804 [2024-07-25 23:35:28.904457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21240 len:8 PRP1 0x0 PRP2 0x0 00:29:37.804 [2024-07-25 23:35:28.904470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.804 [2024-07-25 23:35:28.904483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.804 [2024-07-25 23:35:28.904494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.804 [2024-07-25 23:35:28.904505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:8 PRP1 0x0 PRP2 0x0 00:29:37.804 [2024-07-25 23:35:28.904518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.804 [2024-07-25 23:35:28.904531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.804 [2024-07-25 23:35:28.904542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.804 [2024-07-25 23:35:28.904553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21256 len:8 PRP1 0x0 PRP2 0x0 00:29:37.804 [2024-07-25 23:35:28.904566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:37.804 [2024-07-25 23:35:28.904579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.804 [2024-07-25 23:35:28.904591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.804 [2024-07-25 23:35:28.904602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21264 len:8 PRP1 0x0 PRP2 0x0 00:29:37.804 [2024-07-25 23:35:28.904615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.804 [2024-07-25 23:35:28.904628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.804 [2024-07-25 23:35:28.904639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.804 [2024-07-25 23:35:28.904650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21272 len:8 PRP1 0x0 PRP2 0x0 00:29:37.804 [2024-07-25 23:35:28.904663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.804 [2024-07-25 23:35:28.904676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.804 [2024-07-25 23:35:28.904687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.804 [2024-07-25 23:35:28.904699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:8 PRP1 0x0 PRP2 0x0 00:29:37.804 [2024-07-25 23:35:28.904711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.804 [2024-07-25 23:35:28.904724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.804 [2024-07-25 23:35:28.904735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.804 [2024-07-25 23:35:28.904746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21288 len:8 PRP1 0x0 PRP2 0x0 00:29:37.804 [2024-07-25 23:35:28.904763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.804 [2024-07-25 23:35:28.904777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.804 [2024-07-25 23:35:28.904788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.804 [2024-07-25 23:35:28.904799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21296 len:8 PRP1 0x0 PRP2 0x0 00:29:37.804 [2024-07-25 23:35:28.904811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.804 [2024-07-25 23:35:28.904824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.804 [2024-07-25 23:35:28.904835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.804 [2024-07-25 23:35:28.904846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21304 len:8 PRP1 0x0 PRP2 0x0 00:29:37.804 [2024-07-25 23:35:28.904859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.804 [2024-07-25 23:35:28.904872] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.804 [2024-07-25 23:35:28.904883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.804 [2024-07-25 23:35:28.904894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:8 PRP1 0x0 PRP2 0x0 00:29:37.804 [2024-07-25 23:35:28.904907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same abort / manual-completion / ABORTED - SQ DELETION sequence repeats for WRITE lba:21320 through lba:21376 and one READ lba:20736; duplicate entries elided]
00:29:37.805 [2024-07-25 23:35:28.905367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.805 [2024-07-25 23:35:28.905378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.805 [2024-07-25 23:35:28.905389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21384 len:8 PRP1 0x0 PRP2 0x0 00:29:37.805 [2024-07-25 23:35:28.905402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.805 [2024-07-25 23:35:28.905463] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2035330 was disconnected and freed. reset controller. 00:29:37.805 [2024-07-25 23:35:28.905481] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:29:37.805 [2024-07-25 23:35:28.905496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
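The block above is the expected shape of a path teardown: bdev_nvme aborts every request still queued on the dying qpair, completes each one manually with ABORTED - SQ DELETION, then frees the qpair and fails over from 10.0.0.2:4422 back to 10.0.0.2:4420. A hedged way to sanity-check such a captured run (try.txt is the per-run log this test cats further below; the expectation that the two counts line up is an assumption, not something failover.sh asserts):

# Every abort notice should be paired with a manual completion notice.
aborts=$(grep -c 'nvme_qpair_abort_queued_reqs' try.txt)
completions=$(grep -c 'nvme_qpair_manual_complete_request' try.txt)
echo "aborts=$aborts completions=$completions"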
00:29:37.805 [2024-07-25 23:35:28.905562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2011850 (9): Bad file descriptor 00:29:37.805 [2024-07-25 23:35:28.908811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.805 [2024-07-25 23:35:28.948046] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:37.805 00:29:37.805 Latency(us) 00:29:37.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.805 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:37.805 Verification LBA range: start 0x0 length 0x4000 00:29:37.805 NVMe0n1 : 15.02 8473.05 33.10 567.73 0.00 14130.41 534.00 28544.57 00:29:37.805 =================================================================================================================== 00:29:37.805 Total : 8473.05 33.10 567.73 0.00 14130.41 534.00 28544.57 00:29:37.805 Received shutdown signal, test time was about 15.000000 seconds 00:29:37.805 00:29:37.805 Latency(us) 00:29:37.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.805 =================================================================================================================== 00:29:37.805 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:37.805 23:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:29:37.805 23:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:29:37.805 23:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:29:37.805 23:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1496776 00:29:37.805 23:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:29:37.805 23:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1496776 /var/tmp/bdevperf.sock 00:29:37.805 23:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1496776 ']' 00:29:37.805 23:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:37.805 23:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:37.805 23:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:37.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
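The count=3 gate above is the test's pass criterion: one 'Resetting controller successful' notice per provoked failover, and the run detaches each of the three listeners in turn. A minimal sketch of that check, assuming the bdevperf output was captured to a file named run.log (a hypothetical name; the test greps its own log):

# Expect exactly one successful reset per failover across the
# 4420/4421/4422 listeners, i.e. three in total.
count=$(grep -c 'Resetting controller successful' run.log)
if (( count != 3 )); then
    echo "failover count mismatch: got $count, want 3" >&2
    exit 1
fi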
00:29:37.805 23:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:37.805 23:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:37.805 23:35:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:37.805 23:35:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:29:37.805 23:35:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:37.805 [2024-07-25 23:35:35.422483] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:37.805 23:35:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:38.063 [2024-07-25 23:35:35.671151] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:38.063 23:35:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:38.320 NVMe0n1 00:29:38.320 23:35:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:38.888 00:29:38.888 23:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:39.146 00:29:39.146 23:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:39.146 23:35:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:29:39.404 23:35:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:39.663 23:35:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:29:42.950 23:35:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:42.950 23:35:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:29:42.950 23:35:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1497445 00:29:42.950 23:35:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:42.950 23:35:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1497445 00:29:44.324 0 00:29:44.324 23:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:44.324 [2024-07-25 23:35:34.942299] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:29:44.324 [2024-07-25 23:35:34.942407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1496776 ] 00:29:44.324 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.324 [2024-07-25 23:35:34.974528] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:44.324 [2024-07-25 23:35:35.002860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.324 [2024-07-25 23:35:35.085921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.324 [2024-07-25 23:35:37.301733] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:44.324 [2024-07-25 23:35:37.301832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.324 [2024-07-25 23:35:37.301856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.324 [2024-07-25 23:35:37.301873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.324 [2024-07-25 23:35:37.301887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.324 [2024-07-25 23:35:37.301901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.324 [2024-07-25 23:35:37.301916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.324 [2024-07-25 23:35:37.301930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.324 [2024-07-25 23:35:37.301944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.324 [2024-07-25 23:35:37.301959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.324 [2024-07-25 23:35:37.302004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.324 [2024-07-25 23:35:37.302037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ab850 (9): Bad file descriptor 00:29:44.324 [2024-07-25 23:35:37.353183] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:44.324 Running I/O for 1 seconds... 
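For reference, the RPC sequence that produces a run like the one replayed above, assembled from the commands visible in this log (sockets, ports, and NQNs are exactly as traced; treat the ordering as a sketch rather than a verbatim excerpt of failover.sh):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Publish two extra listeners so the initiator has alternate paths.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# Attach all three trids to the bdevperf instance on bdevperf.sock.
for port in 4420 4421 4422; do
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
# Detaching the active path forces bdev_nvme to fail over to the next trid.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1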
00:29:44.324 00:29:44.324 Latency(us) 00:29:44.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.324 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:44.324 Verification LBA range: start 0x0 length 0x4000 00:29:44.324 NVMe0n1 : 1.00 7830.77 30.59 0.00 0.00 16280.80 731.21 21165.70 00:29:44.324 =================================================================================================================== 00:29:44.324 Total : 7830.77 30.59 0.00 0.00 16280.80 731.21 21165.70 00:29:44.324 23:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:44.324 23:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:29:44.324 23:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:44.582 23:35:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:44.582 23:35:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:29:44.839 23:35:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:45.096 23:35:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:29:48.417 23:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:48.417 23:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:29:48.417 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1496776 00:29:48.417 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1496776 ']' 00:29:48.417 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1496776 00:29:48.417 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:29:48.417 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:48.417 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1496776 00:29:48.417 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:48.417 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:48.417 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1496776' 00:29:48.417 killing process with pid 1496776 00:29:48.417 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1496776 00:29:48.417 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1496776 00:29:48.676 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:29:48.676 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:48.934 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:29:48.934 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:48.934 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:29:48.934 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:48.934 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:29:48.934 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:48.934 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:29:48.934 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:48.934 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:48.934 rmmod nvme_tcp 00:29:48.934 rmmod nvme_fabrics 00:29:48.934 rmmod nvme_keyring 00:29:48.934 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:48.934 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:29:48.934 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:29:48.934 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1494521 ']' 00:29:48.934 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1494521 00:29:48.934 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1494521 ']' 00:29:48.934 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1494521 00:29:48.934 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:29:48.934 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:48.934 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1494521 00:29:48.934 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:48.934 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:48.934 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1494521' 00:29:48.935 killing process with pid 1494521 00:29:48.935 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1494521 00:29:48.935 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1494521 00:29:49.193 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:49.193 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:49.193 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:49.193 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:49.193 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:49.193 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.193 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:49.193 23:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.731 23:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:51.731 00:29:51.731 real 0m35.119s 00:29:51.731 user 2m2.806s 00:29:51.731 sys 0m6.338s 00:29:51.731 23:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:51.731 23:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:51.731 ************************************ 00:29:51.731 END TEST nvmf_failover 00:29:51.731 ************************************ 00:29:51.731 23:35:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:51.731 23:35:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:51.731 23:35:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:51.731 23:35:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.731 ************************************ 00:29:51.731 START TEST nvmf_host_discovery 00:29:51.731 ************************************ 00:29:51.731 23:35:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:51.731 * Looking for test storage... 00:29:51.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:51.731 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:51.732 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:51.732 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:51.732 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:51.732 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:51.732 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:51.732 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:29:51.732 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:29:51.732 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:51.732 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:51.732 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:51.732 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:29:51.732 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:29:51.732 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:51.732 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:51.732 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:51.732 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:51.732 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:51.732 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.732 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:51.732 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.732 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:51.732 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:51.732 23:35:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:29:51.732 23:35:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:53.637 23:35:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:53.637 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:53.637 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.637 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:53.638 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:53.638 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:53.638 23:35:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:53.638 23:35:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:53.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:53.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:29:53.638 00:29:53.638 --- 10.0.0.2 ping statistics --- 00:29:53.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.638 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:53.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:53.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:29:53.638 00:29:53.638 --- 10.0.0.1 ping statistics --- 00:29:53.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.638 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1500045 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1500045 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1500045 ']' 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 
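The namespace plumbing traced above is what lets one machine act as both NVMe/TCP target and initiator. A condensed sketch of those steps, using the device and namespace names from this run (cvl_0_0 and cvl_0_1 are the two E810 ports found earlier; the sketch assumes they already exist and are down):

# Move one port into a private namespace so target and initiator see
# disjoint network stacks, then address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator-facing interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Both directions must answer before the test proceeds.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1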
00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:53.638 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:53.638 [2024-07-25 23:35:51.190499] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:29:53.638 [2024-07-25 23:35:51.190570] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.638 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.638 [2024-07-25 23:35:51.227985] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:53.638 [2024-07-25 23:35:51.255200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.638 [2024-07-25 23:35:51.349479] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.638 [2024-07-25 23:35:51.349532] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.638 [2024-07-25 23:35:51.349546] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.638 [2024-07-25 23:35:51.349558] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.638 [2024-07-25 23:35:51.349574] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
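The NOTICE lines above advertise how to pull a trace out of the running target. As a hedged aside, the two capture options they describe look like this (the commands are quoted from the notices themselves; having spdk_trace on PATH is an assumption):

# Snapshot the nvmf trace group from shared-memory instance 0 ...
spdk_trace -s nvmf -i 0
# ... or stash the raw trace file for offline analysis, as the log suggests.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.saved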
00:29:53.638 [2024-07-25 23:35:51.349632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:53.896 [2024-07-25 23:35:51.487732] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:53.896 [2024-07-25 23:35:51.495926] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:53.896 null0 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:53.896 null1 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@45 -- # hostpid=1500190 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1500190 /tmp/host.sock 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1500190 ']' 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:53.896 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:53.896 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:53.896 [2024-07-25 23:35:51.572391] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:29:53.896 [2024-07-25 23:35:51.572478] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1500190 ] 00:29:53.896 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.896 [2024-07-25 23:35:51.604332] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
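At this point two SPDK applications are alive: the target started earlier inside the namespace on /var/tmp/spdk.sock, and the host-side nvmf_tgt just launched on /tmp/host.sock to act as the discovery client. A sketch of the pair, with binaries and flags as traced (the backgrounding is implied by the waitforlisten calls rather than shown verbatim):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Target side: runs in the namespace, core mask 0x2, all trace groups on.
ip netns exec cvl_0_0_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
# Host side: a second instance on its own RPC socket, core mask 0x1.
$spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &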
00:29:54.154 [2024-07-25 23:35:51.632898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.154 [2024-07-25 23:35:51.722219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.154 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:54.154 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:29:54.154 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:54.154 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:29:54.154 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.154 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.154 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.154 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:29:54.154 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.154 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.154 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.154 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:29:54.154 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:29:54.154 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:54.154 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:54.154 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.154 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.154 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:54.154 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:54.154 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.413 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:29:54.413 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:29:54.413 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:54.413 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:54.413 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.413 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.413 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:54.413 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:54.413 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.413 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:29:54.413 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:29:54.413 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.413 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.413 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.413 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:29:54.413 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:54.413 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:54.413 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.413 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.413 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:54.413 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:54.413 23:35:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 
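The repeated get_subsystem_names / get_bdev_list probes above poll the host app until discovery attaches something; empty output means no controllers or bdevs yet. Reconstructed from the xtrace lines, the helpers are approximately as follows (bodies inferred from the traces, so treat this as a sketch; rpc_cmd resolves to rpc.py against /tmp/host.sock here):

# sort + xargs normalizes the names into one space-separated line,
# which is what the [[ '' == '' ]] comparisons above test against.
get_subsystem_names() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}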
00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:54.413 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.672 [2024-07-25 23:35:52.149705] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
jq -r '.[].name' 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:29:54.672 23:35:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:29:55.240 [2024-07-25 23:35:52.865203] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:55.240 [2024-07-25 23:35:52.865243] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:55.240 [2024-07-25 23:35:52.865271] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:55.240 
[2024-07-25 23:35:52.951545] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:55.500 [2024-07-25 23:35:53.178695] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:55.500 [2024-07-25 23:35:53.178728] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
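Every waitforcondition in this trace expands to the same retry loop, visible at autotest_common.sh@914 through @920: stash the condition string, poll it with eval up to ten times one second apart, and return 0 on the first success. A sketch reconstructed from those trace lines; the in-tree helper may report exhaustion differently:

    waitforcondition() {
        local cond=$1                    # @914
        local max=10                     # @915
        while (( max-- )); do            # @916
            eval "$cond" && return 0     # @917-@918
            sleep 1                      # @920
        done
        return 1    # assumption: this trace never reaches the exhausted case
    }

    # Usage matching the trace: block until discovery has attached nvme0.
    waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'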
00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:55.758 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:55.759 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:55.759 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:29:55.759 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:29:55.759 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:55.759 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:55.759 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.759 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:55.759 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:55.759 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:55.759 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.759 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:29:55.759 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:55.759 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:29:55.759 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:29:55.759 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:55.759 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:55.759 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:55.759 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:55.759 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:55.759 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:29:55.759 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:55.759 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:55.759 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.759 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:56.017 23:35:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:56.017 [2024-07-25 23:35:53.633922] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:56.017 [2024-07-25 23:35:53.634995] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:56.017 [2024-07-25 23:35:53.635048] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:56.017 23:35:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:56.017 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.275 [2024-07-25 23:35:53.761901] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:29:56.275 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:29:56.275 23:35:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:29:56.275 [2024-07-25 23:35:53.860541] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:56.275 [2024-07-25 23:35:53.860569] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:56.275 [2024-07-25 23:35:53.860587] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:57.211 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:57.211 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:57.211 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:29:57.211 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:57.211 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:57.211 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.211 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.211 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:57.211 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:57.211 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.211 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:57.211 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:57.211 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:29:57.211 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:57.211 23:35:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:57.211 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:57.211 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:57.211 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:57.211 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:57.211 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:29:57.211 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:57.211 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.211 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.212 [2024-07-25 23:35:54.849864] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:57.212 [2024-07-25 23:35:54.849893] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:57.212 [2024-07-25 23:35:54.855126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.212 
[2024-07-25 23:35:54.855161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.212 [2024-07-25 23:35:54.855188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.212 [2024-07-25 23:35:54.855202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.212 [2024-07-25 23:35:54.855217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.212 [2024-07-25 23:35:54.855230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.212 [2024-07-25 23:35:54.855245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.212 [2024-07-25 23:35:54.855258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.212 [2024-07-25 23:35:54.855272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9786e0 is same with the state(5) to be set 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:57.212 [2024-07-25 23:35:54.865112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9786e0 (9): Bad file descriptor 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.212 [2024-07-25 23:35:54.875155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:57.212 [2024-07-25 23:35:54.875381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.212 [2024-07-25 23:35:54.875425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9786e0 with addr=10.0.0.2, port=4420 00:29:57.212 [2024-07-25 23:35:54.875445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9786e0 is same with the state(5) to be set 00:29:57.212 [2024-07-25 23:35:54.875472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9786e0 (9): Bad file descriptor 00:29:57.212 [2024-07-25 23:35:54.875497] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:57.212 [2024-07-25 23:35:54.875514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:57.212 [2024-07-25 23:35:54.875537] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:57.212 [2024-07-25 23:35:54.875561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.212 [2024-07-25 23:35:54.885243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:57.212 [2024-07-25 23:35:54.885451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.212 [2024-07-25 23:35:54.885496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9786e0 with addr=10.0.0.2, port=4420 00:29:57.212 [2024-07-25 23:35:54.885515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9786e0 is same with the state(5) to be set 00:29:57.212 [2024-07-25 23:35:54.885540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9786e0 (9): Bad file descriptor 00:29:57.212 [2024-07-25 23:35:54.885564] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:57.212 [2024-07-25 23:35:54.885581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:57.212 [2024-07-25 23:35:54.885596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:57.212 [2024-07-25 23:35:54.885618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:57.212 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:57.212 [2024-07-25 23:35:54.895340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:57.212 [2024-07-25 23:35:54.895530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.212 [2024-07-25 23:35:54.895562] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9786e0 with addr=10.0.0.2, port=4420 00:29:57.212 [2024-07-25 23:35:54.895581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9786e0 is same with the state(5) to be set 00:29:57.212 [2024-07-25 23:35:54.895606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9786e0 (9): Bad file descriptor 00:29:57.212 [2024-07-25 23:35:54.895630] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:57.212 [2024-07-25 23:35:54.895646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:57.212 [2024-07-25 23:35:54.895661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:57.212 [2024-07-25 23:35:54.895688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.212 [2024-07-25 23:35:54.905436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:57.212 [2024-07-25 23:35:54.905615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.212 [2024-07-25 23:35:54.905646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9786e0 with addr=10.0.0.2, port=4420 00:29:57.212 [2024-07-25 23:35:54.905664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9786e0 is same with the state(5) to be set 00:29:57.213 [2024-07-25 23:35:54.905689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9786e0 (9): Bad file descriptor 00:29:57.213 [2024-07-25 23:35:54.905725] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:57.213 [2024-07-25 23:35:54.905744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:57.213 [2024-07-25 23:35:54.905759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:57.213 [2024-07-25 23:35:54.905782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.213 [2024-07-25 23:35:54.915518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:57.213 [2024-07-25 23:35:54.915736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.213 [2024-07-25 23:35:54.915764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9786e0 with addr=10.0.0.2, port=4420 00:29:57.213 [2024-07-25 23:35:54.915781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9786e0 is same with the state(5) to be set 00:29:57.213 [2024-07-25 23:35:54.915803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9786e0 (9): Bad file descriptor 00:29:57.213 [2024-07-25 23:35:54.915850] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:57.213 [2024-07-25 23:35:54.915869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:57.213 [2024-07-25 23:35:54.915884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:57.213 [2024-07-25 23:35:54.915903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.213 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.213 [2024-07-25 23:35:54.925598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:57.213 [2024-07-25 23:35:54.925790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.213 [2024-07-25 23:35:54.925819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9786e0 with addr=10.0.0.2, port=4420 00:29:57.213 [2024-07-25 23:35:54.925836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9786e0 is same with the state(5) to be set 00:29:57.213 [2024-07-25 23:35:54.925858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9786e0 (9): Bad file descriptor 00:29:57.213 [2024-07-25 23:35:54.925892] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:57.213 [2024-07-25 23:35:54.925909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:57.213 [2024-07-25 23:35:54.925924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:57.213 [2024-07-25 23:35:54.925945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
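The burst of "connect() failed, errno = 111" entries here is the point of the @127 step, not a regression: 111 is ECONNREFUSED, and the failures begin the moment nvmf_subsystem_remove_listener drops port 4420 while bdev_nvme is still cycling its reset/reconnect state machine against the stale path. A hedged way to confirm the target-side state (the nvmf_subsystem_get_listeners RPC exists in SPDK, but the jq path into its output is an assumption):

    # List the service IDs still exposed by the subsystem after @127.
    rpc_cmd nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode0 \
        | jq -r '.[].address.trsvcid'
    # Expected here: 4421 only, so every reconnect to 10.0.0.2:4420 above
    # is refused until the discovery poller prunes the dead path.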
00:29:57.213 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:57.213 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:57.213 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:57.213 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:57.213 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:57.213 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:57.213 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:29:57.213 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:29:57.213 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:57.213 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.213 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.213 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:57.213 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:57.213 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:57.471 [2024-07-25 23:35:54.935676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:57.471 [2024-07-25 23:35:54.935850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-25 23:35:54.935882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9786e0 with addr=10.0.0.2, port=4420 00:29:57.471 [2024-07-25 23:35:54.935901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9786e0 is same with the state(5) to be set 00:29:57.471 [2024-07-25 23:35:54.935927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9786e0 (9): Bad file descriptor 00:29:57.471 [2024-07-25 23:35:54.936160] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:57.471 [2024-07-25 23:35:54.936184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:57.471 [2024-07-25 23:35:54.936200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:57.471 [2024-07-25 23:35:54.936220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
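The @131 wait compares against get_subsystem_paths, the third helper in this script, traced at host/discovery.sh@63. Reconstructed from the trace, with sort -n because the values are numeric ports:

    get_subsystem_paths() {
        # All active path service IDs (ports) for one controller, e.g.
        # "4420 4421" before the listener removal and "4421" after it.
        rpc_cmd -s "$HOST_SOCK" bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }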
00:29:57.471 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.471 [2024-07-25 23:35:54.945758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:57.471 [2024-07-25 23:35:54.945945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-25 23:35:54.945977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9786e0 with addr=10.0.0.2, port=4420 00:29:57.471 [2024-07-25 23:35:54.945995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9786e0 is same with the state(5) to be set 00:29:57.471 [2024-07-25 23:35:54.946021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9786e0 (9): Bad file descriptor 00:29:57.471 [2024-07-25 23:35:54.946057] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:57.471 [2024-07-25 23:35:54.946089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:57.471 [2024-07-25 23:35:54.946105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:57.471 [2024-07-25 23:35:54.946142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.471 [2024-07-25 23:35:54.955836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:57.471 [2024-07-25 23:35:54.956027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-25 23:35:54.956076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9786e0 with addr=10.0.0.2, port=4420 00:29:57.471 [2024-07-25 23:35:54.956112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9786e0 is same with the state(5) to be set 00:29:57.471 [2024-07-25 23:35:54.956135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9786e0 (9): Bad file descriptor 00:29:57.471 [2024-07-25 23:35:54.956180] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:57.471 [2024-07-25 23:35:54.956201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:57.471 [2024-07-25 23:35:54.956216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:57.471 [2024-07-25 23:35:54.956236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.471 [2024-07-25 23:35:54.965932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:57.471 [2024-07-25 23:35:54.966153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-25 23:35:54.966182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9786e0 with addr=10.0.0.2, port=4420 00:29:57.471 [2024-07-25 23:35:54.966199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9786e0 is same with the state(5) to be set 00:29:57.471 [2024-07-25 23:35:54.966221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9786e0 (9): Bad file descriptor 00:29:57.471 [2024-07-25 23:35:54.966254] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:57.471 [2024-07-25 23:35:54.966271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:57.471 [2024-07-25 23:35:54.966285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:57.471 [2024-07-25 23:35:54.966304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.471 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:29:57.471 23:35:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:29:57.471 [2024-07-25 23:35:54.976009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:57.471 [2024-07-25 23:35:54.976185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-25 23:35:54.976214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9786e0 with addr=10.0.0.2, port=4420 00:29:57.471 [2024-07-25 23:35:54.976231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9786e0 is same with the state(5) to be set 00:29:57.471 [2024-07-25 23:35:54.976254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9786e0 (9): Bad file descriptor 00:29:57.471 [2024-07-25 23:35:54.976298] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:57.471 [2024-07-25 23:35:54.976318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:57.471 [2024-07-25 23:35:54.976332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:57.471 [2024-07-25 23:35:54.976352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
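Once the dead 4420 path is pruned (next entries), the script re-checks its notification bookkeeping. The counter traced at host/discovery.sh@74 and @75 pairs notify_get_notifications with a running cursor; the update rule below is inferred from the ids in this trace (notify_id 0 -> 1 -> 2, then parked at 2 when no new events arrive), so treat it as a sketch rather than the script's exact code:

    get_notification_count() {
        # Count bdev events newer than the last seen id, then advance the
        # cursor so each event is only counted once.
        notification_count=$(rpc_cmd -s "$HOST_SOCK" notify_get_notifications -i "$notify_id" \
            | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }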
00:29:57.471 [2024-07-25 23:35:54.977283] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:29:57.471 [2024-07-25 23:35:54.977311] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:58.405 23:35:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:58.405 23:35:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:29:58.405 23:35:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:29:58.405 23:35:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:58.405 23:35:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.405 23:35:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:58.405 23:35:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:58.405 23:35:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:58.405 23:35:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:58.405 23:35:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.405 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:29:58.405 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:58.405 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:29:58.405 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:58.405 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:58.405 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:58.405 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:58.405 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:58.405 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:58.405 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:29:58.405 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:58.405 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:58.405 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.405 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:58.405 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.405 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:58.405 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:58.405 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:29:58.405 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:58.405 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:29:58.405 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.405 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:58.405 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.405 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:29:58.406 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:29:58.406 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:58.406 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:58.406 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:29:58.406 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:29:58.406 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:58.406 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:58.406 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.406 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:58.406 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:58.406 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:58.406 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.406 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:29:58.406 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:58.406 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:29:58.406 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:29:58.406 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:58.406 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 
-- # (( max-- )) 00:29:58.406 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:29:58.406 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:29:58.406 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:58.406 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:58.406 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.406 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:58.406 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:58.406 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:58.665 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.665 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:29:58.665 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:58.665 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:29:58.665 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:29:58.665 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:58.665 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:58.665 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:58.665 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:58.665 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:58.665 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:29:58.665 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:58.665 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:58.665 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.665 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:58.665 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.665 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:29:58.665 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:29:58.665 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:29:58.665 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:58.665 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:58.665 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.665 23:35:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.600 [2024-07-25 23:35:57.267266] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:59.600 [2024-07-25 23:35:57.267291] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:59.600 [2024-07-25 23:35:57.267312] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:59.858 [2024-07-25 23:35:57.353600] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:29:59.858 [2024-07-25 23:35:57.421642] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:59.858 [2024-07-25 23:35:57.421683] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.858 request: 00:29:59.858 { 00:29:59.858 "name": "nvme", 00:29:59.858 "trtype": "tcp", 00:29:59.858 "traddr": "10.0.0.2", 00:29:59.858 "adrfam": "ipv4", 00:29:59.858 "trsvcid": "8009", 00:29:59.858 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:59.858 "wait_for_attach": true, 00:29:59.858 "method": "bdev_nvme_start_discovery", 00:29:59.858 "req_id": 1 00:29:59.858 } 00:29:59.858 Got JSON-RPC error response 00:29:59.858 response: 00:29:59.858 { 00:29:59.858 "code": -17, 00:29:59.858 "message": "File exists" 00:29:59.858 } 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:59.858 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:59.859 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:59.859 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:59.859 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:59.859 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:59.859 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.859 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.859 request: 00:29:59.859 { 00:29:59.859 "name": "nvme_second", 00:29:59.859 "trtype": "tcp", 00:29:59.859 "traddr": "10.0.0.2", 00:29:59.859 "adrfam": "ipv4", 00:29:59.859 "trsvcid": "8009", 00:29:59.859 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:59.859 "wait_for_attach": true, 00:29:59.859 "method": "bdev_nvme_start_discovery", 00:29:59.859 "req_id": 1 00:29:59.859 } 00:29:59.859 Got JSON-RPC error response 00:29:59.859 response: 00:29:59.859 { 00:29:59.859 "code": -17, 00:29:59.859 "message": "File exists" 00:29:59.859 } 00:29:59.859 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:59.859 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:29:59.859 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:59.859 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:59.859 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:59.859 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:29:59.859 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:59.859 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.859 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:59.859 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.859 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:59.859 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:59.859 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.859 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:29:59.859 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:29:59.859 23:35:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:59.859 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.859 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:59.859 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.859 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:59.859 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:00.118 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.118 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:00.118 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:00.118 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:00.119 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:00.119 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:00.119 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:00.119 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:00.119 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:00.119 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:00.119 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.119 23:35:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.056 [2024-07-25 23:35:58.617987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.056 [2024-07-25 23:35:58.618051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976430 with addr=10.0.0.2, port=8010 00:30:01.056 [2024-07-25 23:35:58.618085] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:01.056 [2024-07-25 23:35:58.618126] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:01.056 [2024-07-25 23:35:58.618138] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:01.995 [2024-07-25 23:35:59.620611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.995 [2024-07-25 23:35:59.620691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x976430 with addr=10.0.0.2, port=8010 00:30:01.995 [2024-07-25 23:35:59.620724] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:01.995 [2024-07-25 23:35:59.620767] nvme.c: 830:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:30:01.995 [2024-07-25 23:35:59.620783] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:02.933 [2024-07-25 23:36:00.622697] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:02.933 request: 00:30:02.933 { 00:30:02.933 "name": "nvme_second", 00:30:02.933 "trtype": "tcp", 00:30:02.933 "traddr": "10.0.0.2", 00:30:02.933 "adrfam": "ipv4", 00:30:02.933 "trsvcid": "8010", 00:30:02.933 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:02.933 "wait_for_attach": false, 00:30:02.933 "attach_timeout_ms": 3000, 00:30:02.933 "method": "bdev_nvme_start_discovery", 00:30:02.933 "req_id": 1 00:30:02.933 } 00:30:02.933 Got JSON-RPC error response 00:30:02.933 response: 00:30:02.933 { 00:30:02.933 "code": -110, 00:30:02.933 "message": "Connection timed out" 00:30:02.933 } 00:30:02.934 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:02.934 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:02.934 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:02.934 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:02.934 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:02.934 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:30:02.934 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:02.934 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:02.934 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.934 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:02.934 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:02.934 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:02.934 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.193 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:30:03.193 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:30:03.193 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1500190 00:30:03.193 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:30:03.193 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:03.193 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:30:03.193 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:03.193 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:30:03.193 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:03.193 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:03.193 rmmod nvme_tcp 00:30:03.193 rmmod nvme_fabrics 00:30:03.193 rmmod nvme_keyring 00:30:03.193 23:36:00 
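Note: stripped of xtrace prefixes, the nvmftestfini/killprocess teardown traced above and below reduces to roughly this sequence (a simplified reading of nvmf/common.sh, not a verbatim copy; pids taken from the trace):

kill 1500190                      # stop the host app holding /tmp/host.sock
sync                              # nvmfcleanup before unloading modules
modprobe -v -r nvme-tcp           # emitted the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines
modprobe -v -r nvme-fabrics
kill -0 1500045 && kill 1500045   # killprocess: stop the nvmf target if still alive
wait 1500045
ip -4 addr flush cvl_0_1          # remove_spdk_ns: drop the initiator-side test address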
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:03.193 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:30:03.193 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:30:03.193 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1500045 ']' 00:30:03.193 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1500045 00:30:03.193 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1500045 ']' 00:30:03.193 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1500045 00:30:03.193 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:30:03.193 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:03.193 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1500045 00:30:03.193 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:03.193 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:03.193 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1500045' 00:30:03.193 killing process with pid 1500045 00:30:03.193 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1500045 00:30:03.193 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1500045 00:30:03.452 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:03.452 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:03.452 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:03.452 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:03.452 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:03.452 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.452 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:03.452 23:36:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.359 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:05.359 00:30:05.359 real 0m14.046s 00:30:05.359 user 0m20.867s 00:30:05.359 sys 0m2.859s 00:30:05.359 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:05.359 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:05.359 ************************************ 00:30:05.359 END TEST nvmf_host_discovery 00:30:05.359 ************************************ 00:30:05.359 23:36:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:05.359 23:36:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
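Note on the test that just ended: the duplicate bdev_nvme_start_discovery calls were wrapped in the NOT helper (the @650..@677 frames), so the test passes precisely because the RPCs fail, with -17 "File exists" when the discovery name is already registered and -110 "Connection timed out" when the 8010 listener never answers within the -T 3000 ms attach timeout. A rough reconstruction of that wrapper from the xtrace (the real helper in common/autotest_common.sh screens a few more cases):

NOT() {
    local es=0
    "$@" || es=$?                    # run the command that is expected to fail
    (( es > 128 )) && return "$es"   # killed by a signal: propagate (reconstruction; exact handling may differ)
    (( !es == 0 ))                   # exit 0 only if the command actually failed
}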
00:30:05.359 23:36:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:05.359 23:36:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.359 ************************************ 00:30:05.359 START TEST nvmf_host_multipath_status 00:30:05.359 ************************************ 00:30:05.359 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:05.616 * Looking for test storage... 00:30:05.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:05.616 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:05.616 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:05.616 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:05.616 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:05.616 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:05.616 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:05.616 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:05.616 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:05.616 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:05.616 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:05.616 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:05.616 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
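Note: condensed from the nvmf/common.sh frames above, the defaults the multipath test inherits are the following (values as traced; the hostnqn/hostid derivation shown is an assumption about how the two traced values relate):

NVMF_PORT=4420                      # first listener, also the iptables ACCEPT port below
NVMF_SECOND_PORT=4421               # second listener used for the extra path
NVMF_THIRD_PORT=4422
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>, fresh per run
NVME_HOSTID=${NVME_HOSTNQN##*:}     # the uuid part (assumed derivation)
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id plus full tracepoint mask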
00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:30:05.617 23:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:07.554 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:07.555 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:07.555 
23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:07.555 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:07.555 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:07.555 23:36:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:07.555 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:07.555 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:07.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:07.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:30:07.556 00:30:07.556 --- 10.0.0.2 ping statistics --- 00:30:07.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.556 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:07.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:07.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:30:07.556 00:30:07.556 --- 10.0.0.1 ping statistics --- 00:30:07.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.556 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1503460 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1503460 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1503460 ']' 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
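Note: stripped of xtrace prefixes, the plumbing that produced the two clean pings above is the standard nvmf_tcp_init sequence; the e810 port cvl_0_0 becomes the target inside a private namespace while cvl_0_1 stays in the root namespace as the initiator:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

The target app is then launched through the same namespace: as the @270 frame shows, NVMF_APP is prefixed with the ip netns exec cvl_0_0_ns_spdk command.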
00:30:07.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:07.556 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:07.556 [2024-07-25 23:36:05.231126] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:30:07.556 [2024-07-25 23:36:05.231223] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:07.556 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.556 [2024-07-25 23:36:05.275132] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:07.815 [2024-07-25 23:36:05.304030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:07.815 [2024-07-25 23:36:05.401690] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:07.815 [2024-07-25 23:36:05.401745] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:07.815 [2024-07-25 23:36:05.401774] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:07.815 [2024-07-25 23:36:05.401787] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:07.815 [2024-07-25 23:36:05.401797] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:07.815 [2024-07-25 23:36:05.403084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:07.815 [2024-07-25 23:36:05.403094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.815 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:07.815 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:30:07.815 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:07.815 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:07.815 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:08.073 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:08.073 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1503460 00:30:08.073 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:08.331 [2024-07-25 23:36:05.821636] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:08.331 23:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:08.589 Malloc0 00:30:08.589 23:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:30:08.847 23:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:09.105 23:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:09.363 [2024-07-25 23:36:06.956683] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:09.363 23:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:09.621 [2024-07-25 23:36:07.201299] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:09.621 23:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1503634 00:30:09.621 23:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:09.621 23:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:09.621 23:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1503634 /var/tmp/bdevperf.sock 00:30:09.621 23:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1503634 ']' 00:30:09.621 23:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:09.621 23:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:09.621 23:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:09.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:09.621 23:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:09.621 23:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:09.880 23:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:09.880 23:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:30:09.880 23:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:10.138 23:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:30:10.704 Nvme0n1 00:30:10.704 23:36:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:11.272 Nvme0n1 00:30:11.272 23:36:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:30:11.272 23:36:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:30:13.173 23:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:30:13.173 23:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:13.431 23:36:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:13.689 23:36:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:30:15.062 23:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:30:15.062 23:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:15.063 23:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:15.063 23:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:15.063 23:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:15.063 23:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:15.063 23:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:15.063 23:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:15.321 23:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:15.321 23:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:15.321 23:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:15.321 23:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:15.579 23:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:15.579 23:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:15.579 23:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:15.579 23:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:15.853 23:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:15.853 23:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:15.853 23:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:15.853 23:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:16.116 23:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:16.116 23:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:16.116 23:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.116 23:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:16.373 23:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:16.373 23:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:30:16.373 23:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:16.629 23:36:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:16.887 23:36:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:30:17.822 23:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:30:17.822 23:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:17.822 23:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:17.822 23:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:18.079 23:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:18.079 23:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:18.079 23:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.079 23:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:18.337 23:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:18.337 23:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:18.337 23:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.337 23:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:18.594 23:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:18.594 23:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:18.594 23:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.594 23:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:18.851 23:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:18.852 23:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:18.852 23:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:18.852 23:36:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:19.109 23:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.109 23:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:19.109 23:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:19.109 23:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:19.366 23:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.366 23:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:30:19.366 23:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:19.624 23:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:19.883 23:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:30:20.818 23:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:30:20.818 23:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:20.818 23:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:20.818 23:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:21.076 23:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:21.076 23:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:21.076 23:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.076 23:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:21.333 23:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:21.333 23:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:21.333 23:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.333 23:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:21.591 23:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:21.591 23:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:21.591 23:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.591 23:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:21.848 23:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:21.848 23:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:21.848 23:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:21.848 23:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:22.106 23:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:22.106 23:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:22.106 23:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:22.106 23:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:22.364 23:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:22.364 23:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:30:22.364 23:36:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:22.621 23:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:22.883 23:36:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:30:23.862 23:36:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:30:23.862 23:36:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:23.862 23:36:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:23.862 23:36:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:24.120 23:36:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:24.120 23:36:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:24.120 23:36:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.120 23:36:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:24.377 23:36:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:24.377 23:36:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:24.377 23:36:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.377 23:36:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:24.635 23:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:24.635 23:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:24.635 23:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.635 23:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:24.893 23:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:24.893 23:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:24.893 23:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.893 23:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:25.151 23:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:25.151 23:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:25.151 23:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:25.151 23:36:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:25.409 23:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:25.409 23:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:30:25.410 23:36:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:25.668 23:36:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:25.924 23:36:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:30:26.859 23:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:30:26.859 23:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:26.859 23:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:26.859 23:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:27.118 23:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:27.118 23:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:27.118 23:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:27.118 23:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:27.376 23:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:27.376 23:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:27.376 23:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:27.376 23:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:27.634 23:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:27.634 23:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:27.634 23:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:27.634 23:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:27.892 23:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:27.892 23:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:27.892 23:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:27.892 23:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:28.149 23:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:28.149 23:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:28.149 23:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:28.149 23:36:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:28.407 23:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:28.407 23:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:30:28.407 23:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:28.664 23:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:28.923 23:36:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:30:29.861 23:36:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:30:29.861 23:36:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:29.861 23:36:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:29.861 23:36:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:30.119 23:36:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:30.119 23:36:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:30.119 23:36:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.119 23:36:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:30.377 23:36:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:30.377 23:36:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:30.377 23:36:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.377 23:36:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:30.635 23:36:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:30.635 23:36:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:30.635 23:36:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.635 23:36:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:30.893 23:36:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:30.893 23:36:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:30.893 23:36:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.893 23:36:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:31.151 23:36:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:31.151 23:36:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:31.151 23:36:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.151 23:36:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:31.409 23:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.409 23:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:30:31.667 23:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:30:31.667 23:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:31.925 23:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:32.183 23:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:30:33.560 23:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:30:33.560 23:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:33.560 23:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.560 23:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:33.560 23:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:33.560 23:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:33.560 23:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.560 23:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:33.818 23:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:33.818 23:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:33.818 23:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.818 23:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:34.076 23:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.076 23:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:34.076 23:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.076 23:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:34.333 23:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.333 23:36:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:34.333 23:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.333 23:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:34.589 23:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.589 23:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:34.589 23:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:34.589 23:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:34.846 23:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:34.846 23:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:30:34.846 23:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:35.103 23:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:35.362 23:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:30:36.295 23:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:30:36.295 23:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:36.295 23:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.295 23:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:36.552 23:36:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:36.552 23:36:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:36.552 23:36:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.552 23:36:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:36.810 23:36:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:36.810 23:36:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:36.810 23:36:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.810 23:36:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:37.067 23:36:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.067 23:36:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:37.067 23:36:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.067 23:36:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:37.324 23:36:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.324 23:36:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:37.324 23:36:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.324 23:36:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:37.581 23:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.581 23:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:37.581 23:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:37.581 23:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:37.839 23:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:37.839 23:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:30:37.839 23:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:38.095 23:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:38.355 23:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
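From here the test is one pattern repeated per ANA combination: the bdevperf initiator was attached to the same subsystem twice (ports 4420 and 4421, the second attach with -x multipath), so bdev_nvme_get_io_paths reports two io_paths, and each check_status round flips the listeners' ANA states, sleeps a second, and asserts the per-path current/connected/accessible flags. A condensed sketch of one such round (port_status is a re-derivation of the harness helper for illustration; the jq filter is the one used above):

rpc="$SPDK_ROOT/scripts/rpc.py"    # SPDK_ROOT is illustrative, as before

# Query one attribute (current/connected/accessible) of the path behind
# a given listener port, as seen by the bdevperf initiator.
port_status() {
    local port=$1 attr=$2 expected=$3
    local actual
    actual=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ $actual == "$expected" ]]
}

# Make 4420 the preferred path and verify the initiator's view.
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -n optimized
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
sleep 1                              # let the ANA change propagate
port_status 4420 current true
port_status 4421 current false
port_status 4421 accessible true     # non_optimized is still usable

Under the default active_passive policy only one path can report current==true at a time, which is why the earlier optimized/optimized round still shows true/false; after bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active (set above), both optimized paths report current==true, the true/true pattern visible in the rounds that follow.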
00:30:39.317 23:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:30:39.317 23:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:39.317 23:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.317 23:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:39.575 23:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:39.575 23:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:39.575 23:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.575 23:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:39.832 23:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:39.832 23:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:39.832 23:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.832 23:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:40.090 23:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.090 23:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:40.090 23:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.090 23:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:40.347 23:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.347 23:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:40.347 23:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.347 23:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:40.604 23:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.604 23:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:40.604 23:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.604 23:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:40.862 23:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.862 23:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:30:40.862 23:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:41.120 23:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:41.379 23:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:30:42.314 23:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:30:42.314 23:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:42.314 23:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.314 23:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:42.572 23:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:42.572 23:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:42.572 23:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.572 23:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:42.830 23:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:42.830 23:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:42.830 23:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:42.830 23:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:43.087 23:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:30:43.087 23:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:43.087 23:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.087 23:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:43.345 23:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:43.345 23:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:43.345 23:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.345 23:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:43.602 23:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:43.602 23:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:43.602 23:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.602 23:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:43.860 23:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:43.860 23:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1503634 00:30:43.860 23:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1503634 ']' 00:30:43.860 23:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1503634 00:30:43.860 23:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:30:43.860 23:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:43.860 23:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1503634 00:30:43.860 23:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:30:43.860 23:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:30:43.860 23:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1503634' 00:30:43.860 killing process with pid 1503634 00:30:43.860 23:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1503634 00:30:43.861 23:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1503634 00:30:44.139 Connection closed with partial response: 00:30:44.139 00:30:44.139 00:30:44.139 
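The 'Connection closed with partial response' lines are the initiator reacting to the harness killing bdevperf mid-run (kill 1503634), not a target-side I/O failure. The killprocess helper seen doing that kill guards against signalling the wrong process: it checks the pid is still alive, refuses to signal a process whose comm is sudo, then kills and reaps it. A minimal re-derivation of that logic (the real autotest helper does more bookkeeping):

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1              # is the pid still alive?
    if [[ $(uname) == Linux ]]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name != sudo ]] || return 1     # never signal the sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                             # reap it so the exit status is ours
}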
23:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1503634 00:30:44.140 23:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:44.140 [2024-07-25 23:36:07.264122] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:30:44.140 [2024-07-25 23:36:07.264212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1503634 ] 00:30:44.140 EAL: No free 2048 kB hugepages reported on node 1 00:30:44.140 [2024-07-25 23:36:07.300537] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:44.140 [2024-07-25 23:36:07.328819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.140 [2024-07-25 23:36:07.420697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:44.140 Running I/O for 90 seconds... 00:30:44.140 [2024-07-25 23:36:23.233721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:67912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.140 [2024-07-25 23:36:23.233786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.233836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.140 [2024-07-25 23:36:23.233854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.233877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.140 [2024-07-25 23:36:23.233895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.233917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.140 [2024-07-25 23:36:23.233933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.233955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.140 [2024-07-25 23:36:23.233971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.233993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.140 [2024-07-25 23:36:23.234009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.234031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:67968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:44.140 [2024-07-25 23:36:23.234070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.234098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.140 [2024-07-25 23:36:23.234115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.235491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.140 [2024-07-25 23:36:23.235517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.235561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.140 [2024-07-25 23:36:23.235580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.235616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.140 [2024-07-25 23:36:23.235633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.235656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.140 [2024-07-25 23:36:23.235672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.235695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.140 [2024-07-25 23:36:23.235711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.235733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.140 [2024-07-25 23:36:23.235749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.235771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.140 [2024-07-25 23:36:23.235787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.235810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.140 [2024-07-25 23:36:23.235826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.235848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 
lba:67920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.140 [2024-07-25 23:36:23.235864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.235886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.140 [2024-07-25 23:36:23.235903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.235925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.140 [2024-07-25 23:36:23.235940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.235963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:68064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.140 [2024-07-25 23:36:23.235979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.236001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.140 [2024-07-25 23:36:23.236018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.236040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.140 [2024-07-25 23:36:23.236056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.236089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.140 [2024-07-25 23:36:23.236111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.236136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.140 [2024-07-25 23:36:23.236152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.236175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.140 [2024-07-25 23:36:23.236191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.236213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.140 [2024-07-25 23:36:23.236230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.236252] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.140 [2024-07-25 23:36:23.236269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.236291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:68128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.140 [2024-07-25 23:36:23.236307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:44.140 [2024-07-25 23:36:23.236330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:68136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.236346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.236368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:68144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.236384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.236406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:68152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.236423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.236445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:68160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.236461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.236484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:68168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.236500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.236523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:68176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.236539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.236561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:68184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.236581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.236604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:68192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.236621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
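Every completion in the burst above carries the same status string, ASYMMETRIC ACCESS INACCESSIBLE (03/02). In spdk_nvme_print_completion output the parenthesized pair is (SCT/SC) in hex: Status Code Type 3h is the NVMe Path Related Status type, and Status Code 02h within it is Asymmetric Access Inaccessible, i.e. the target is reporting the namespace's ANA state as inaccessible on the path this qpair uses, which is the condition the multipath_status test provokes on purpose. A minimal bash sketch for pulling the pair out of such a line and naming the status-code type (a hypothetical helper, assuming only the (SCT/SC) format shown above; it is not part of the test suite):

decode_status() {
  # $1: one completion line as printed by spdk_nvme_print_completion
  local pair sct sc type
  # the only parenthesized hex pair on the line is "(SCT/SC)"
  pair=$(grep -o '([0-9a-f][0-9a-f]/[0-9a-f][0-9a-f])' <<<"$1" | head -n1)
  sct=${pair:1:2}; sc=${pair:4:2}
  case "$sct" in
    00) type="generic command status" ;;
    01) type="command specific status" ;;
    02) type="media and data integrity errors" ;;
    03) type="path related status" ;;
    *)  type="vendor specific or reserved" ;;
  esac
  echo "sct=0x$sct ($type) sc=0x$sc"
}

Fed the first completion above, it prints sct=0x03 (path related status) sc=0x02.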
00:30:44.141 [2024-07-25 23:36:23.236644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.236660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.236683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:68208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.236699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.236722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:68216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.236738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.236761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:68224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.236777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.237356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:68232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.237380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.237407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.237425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.237449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:68248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.237465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.237487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.237503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.237526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.237543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.237566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:68272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.237582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.237604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:68280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.237620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.237648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:68288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.237665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.237687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:68296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.237703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.237725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:68304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.237742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.237765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:68312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.237781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.237803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:68320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.237819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.237857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:68328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.237874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.237896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:68336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.237911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.237933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:68344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.237949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.237971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:68352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.237986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.238008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:68360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.238023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.238045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:68368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.238082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.238110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:68376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.238126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:44.141 [2024-07-25 23:36:23.238153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:68384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.141 [2024-07-25 23:36:23.238170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.238193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:68392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.238209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.238232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.238248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.238270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:68408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.238286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.238309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:68416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.238325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.238348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:68424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.238363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.238386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:68432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 
[2024-07-25 23:36:23.238402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.238425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:68440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.238441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.238463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:68448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.238480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.238502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:68456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.238518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.238541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.238557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.238580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:68472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.238596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.238618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:68480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.238638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.238662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:68488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.238678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.238701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:68496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.238717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.238739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:68504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.238756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.238778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:68512 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.238795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.238818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:68520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.238834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.238857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:68528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.238873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.238896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:68536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.238912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.238935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.238951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.238974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:68552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.238991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.239013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:68560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.239029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.239052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:68568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.239080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.239106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:68576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.239127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.239151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:68584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.239168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.239190] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:68592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.239207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.239230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.239246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.239269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:68608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.239285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.239308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:68616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.239324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.239347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:68624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.239363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:44.142 [2024-07-25 23:36:23.239401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.142 [2024-07-25 23:36:23.239417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.239439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:68640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.239454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.239476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:68648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.239492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.239514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:68656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.239530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.239551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:68664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.239567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 
23:36:23.239589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:68672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.239605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.239631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:68680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.239647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.239669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.239685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.239707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:68696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.239723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.239746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:68704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.239762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.239784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:68712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.239800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.239822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:68720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.239838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.239860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:68728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.239876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.239898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.239914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.239936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:68744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.239952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 
sqhd:0055 p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.239974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:68752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.239989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.240012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:67912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.143 [2024-07-25 23:36:23.240027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.240050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.240087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.240118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.240136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.240158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.240175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.240198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.240215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.240237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.240254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.240276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:67968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.240293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.240316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.240332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.241175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:68760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.241198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.241226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:68768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.241244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.241267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.241284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.241307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:68784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.241323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.241346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:68792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.241362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.241384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:68800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.241401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.241423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:68808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.241444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.241467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:68816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.241483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:44.143 [2024-07-25 23:36:23.241506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:68824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.143 [2024-07-25 23:36:23.241522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.241545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:68832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.241561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.241583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:68840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.241600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.241622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:68848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.241638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.241661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.241677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.241700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:68864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.241716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.241738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:68872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.241754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.241777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:68880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.241793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.241815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:68888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.241831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.241854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.241870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.241892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:68904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.241916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.241939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:68912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.241971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.241994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:68920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:44.144 [2024-07-25 23:36:23.242009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.242031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:68928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.242047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.242090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.242110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.242133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.242150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.242173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.242189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.242211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.242227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.242250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.242266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.242288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.242305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.242327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.242343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.242381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.242398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.242421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:67920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.144 [2024-07-25 23:36:23.242437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.242463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.242480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.242502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.242518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.242540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:68064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.242556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.242577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.242593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.242615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:68080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.242631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.242652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.242668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.242690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.242706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.242728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.242743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:44.144 [2024-07-25 23:36:23.242765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.144 [2024-07-25 23:36:23.242781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:44.145 [2024-07-25 23:36:23.242803] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.145 [2024-07-25 23:36:23.242818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:44.145 [2024-07-25 23:36:23.242841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:68128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.145 [2024-07-25 23:36:23.242856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:44.145 [2024-07-25 23:36:23.242878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:68136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.145 [2024-07-25 23:36:23.242894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:44.145 [2024-07-25 23:36:23.242935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:68144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.145 [2024-07-25 23:36:23.242953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:44.145 [2024-07-25 23:36:23.242976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:68152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.145 [2024-07-25 23:36:23.242992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:44.145 [2024-07-25 23:36:23.243015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:68160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.145 [2024-07-25 23:36:23.243031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:44.145 [2024-07-25 23:36:23.243053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:68168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.145 [2024-07-25 23:36:23.243093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:44.145 [2024-07-25 23:36:23.243124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:68176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.145 [2024-07-25 23:36:23.243141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:44.145 [2024-07-25 23:36:23.243164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:68184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.145 [2024-07-25 23:36:23.243180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:44.145 [2024-07-25 23:36:23.243211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:68192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.145 [2024-07-25 23:36:23.243230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
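By this point the trace is re-issuing the same LBA range with fresh cids, and every completion still returns the identical (03/02) path-related status; the informative signal in a capture like this is the count of failed completions and the set of LBAs they cover, not any single record. Two hypothetical one-liners in the log's own grep idiom that summarize the captured try.txt that way (assuming only the record format shown above):

# total completions reported as ANA inaccessible
grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' try.txt | wc -l
# distinct LBAs touched while the path was inaccessible, in order
grep -o 'lba:[0-9]*' try.txt | cut -d: -f2 | sort -n | uniq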
00:30:44.145 [2024-07-25 23:36:23.243252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:68200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.145 [2024-07-25 23:36:23.243268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
[2024-07-25 23:36:23.243 - 23:36:23.254] (condensed: roughly 200 further command/completion pairs identical in form to the two NOTICE lines above) nvme_io_qpair_print_command reports each queued WRITE (plus occasional READs at lba:67912 and lba:67920 using SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) on sqid:1 nsid:1, lba:67912-68928, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000; spdk_nvme_print_completion reports every one of them completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), cdw0:0 p:0 m:0 dnr:0, with sqhd advancing through 0x0011-0x007f and wrapping to 0x0000 as the same LBA window is resubmitted under fresh cids and fails again.
ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:44.152 [2024-07-25 23:36:23.254548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:68768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.152 [2024-07-25 23:36:23.254564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:44.152 [2024-07-25 23:36:23.254587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:68776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.152 [2024-07-25 23:36:23.254603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:44.152 [2024-07-25 23:36:23.254626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:68784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.152 [2024-07-25 23:36:23.254642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:44.152 [2024-07-25 23:36:23.254665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:68792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.152 [2024-07-25 23:36:23.254681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:44.152 [2024-07-25 23:36:23.254704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:68800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.152 [2024-07-25 23:36:23.254722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:44.152 [2024-07-25 23:36:23.254744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:68808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.152 [2024-07-25 23:36:23.254776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:44.152 [2024-07-25 23:36:23.254798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:68816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.152 [2024-07-25 23:36:23.254818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:44.152 [2024-07-25 23:36:23.254841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:68824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.152 [2024-07-25 23:36:23.254857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:44.152 [2024-07-25 23:36:23.254878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:68832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.152 [2024-07-25 23:36:23.254895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:44.152 [2024-07-25 23:36:23.254916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.152 [2024-07-25 23:36:23.254932] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:44.152 [2024-07-25 23:36:23.254969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:68848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.152 [2024-07-25 23:36:23.254984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:44.152 [2024-07-25 23:36:23.255006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:68856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.152 [2024-07-25 23:36:23.255022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:44.152 [2024-07-25 23:36:23.255068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:68864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.152 [2024-07-25 23:36:23.255087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:44.152 [2024-07-25 23:36:23.255111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:68872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.152 [2024-07-25 23:36:23.255128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:44.152 [2024-07-25 23:36:23.255151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.152 [2024-07-25 23:36:23.255168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:44.152 [2024-07-25 23:36:23.255192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:68888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.152 [2024-07-25 23:36:23.255208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:44.152 [2024-07-25 23:36:23.255231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:68896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.152 [2024-07-25 23:36:23.255247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:44.152 [2024-07-25 23:36:23.255270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:68904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.152 [2024-07-25 23:36:23.255287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:44.152 [2024-07-25 23:36:23.255310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:68912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.152 [2024-07-25 23:36:23.255326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:44.152 [2024-07-25 23:36:23.255353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:68920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:44.152 [2024-07-25 23:36:23.255384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:44.152 [2024-07-25 23:36:23.255407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.152 [2024-07-25 23:36:23.255422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:44.152 [2024-07-25 23:36:23.255443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.152 [2024-07-25 23:36:23.255458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:44.152 [2024-07-25 23:36:23.255479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.152 [2024-07-25 23:36:23.255494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:44.152 [2024-07-25 23:36:23.255515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.152 [2024-07-25 23:36:23.255531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:44.152 [2024-07-25 23:36:23.255552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.152 [2024-07-25 23:36:23.255566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:44.152 [2024-07-25 23:36:23.255588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.153 [2024-07-25 23:36:23.255602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:44.153 [2024-07-25 23:36:23.255623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.153 [2024-07-25 23:36:23.255638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:44.153 [2024-07-25 23:36:23.255659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.153 [2024-07-25 23:36:23.255674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:44.153 [2024-07-25 23:36:23.255695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.153 [2024-07-25 23:36:23.255710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:44.153 [2024-07-25 23:36:23.255731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 
lba:67920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.153 [2024-07-25 23:36:23.255746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:44.153 [2024-07-25 23:36:23.255767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.153 [2024-07-25 23:36:23.255782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:44.153 [2024-07-25 23:36:23.255806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.153 [2024-07-25 23:36:23.255821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:44.153 [2024-07-25 23:36:23.255842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:68064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.153 [2024-07-25 23:36:23.255857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.153 [2024-07-25 23:36:23.255878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.153 [2024-07-25 23:36:23.255893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.153 [2024-07-25 23:36:23.255914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:68080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.153 [2024-07-25 23:36:23.255929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:44.153 [2024-07-25 23:36:23.255950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.153 [2024-07-25 23:36:23.255966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:44.153 [2024-07-25 23:36:23.255987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.153 [2024-07-25 23:36:23.256002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:44.153 [2024-07-25 23:36:23.256023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.153 [2024-07-25 23:36:23.256038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:44.153 [2024-07-25 23:36:23.256087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.153 [2024-07-25 23:36:23.256104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:44.153 [2024-07-25 23:36:23.256127] nvme_qpair.c: 
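The status repeated in every completion above is printed as "(SCT/SC)": status code type 0x3 is the NVMe "path related" group, and status code 0x02 within that group is Asymmetric Access Inaccessible, i.e. the ANA state reported while this path is inaccessible during the test; dnr:0 means "do not retry" is clear, so the host may retry once a path returns. A minimal sketch of how those fields unpack from completion-queue-entry dword 3 (the decoder below is illustrative, not an SPDK API):

    # Illustrative decoder for the status fields spdk_nvme_print_completion
    # shows above; layout follows the NVMe CQE (dword 3, bits 16..31).
    SCT_NAMES = {0x0: "GENERIC", 0x1: "COMMAND SPECIFIC",
                 0x2: "MEDIA/DATA INTEGRITY", 0x3: "PATH RELATED",
                 0x7: "VENDOR SPECIFIC"}
    PATH_SC_NAMES = {0x00: "INTERNAL PATH ERROR",
                     0x01: "ASYMMETRIC ACCESS PERSISTENT LOSS",
                     0x02: "ASYMMETRIC ACCESS INACCESSIBLE",
                     0x03: "ASYMMETRIC ACCESS TRANSITION"}

    def decode_cqe_dw3(dw3: int) -> dict:
        """Split CQE dword 3 into the phase tag and status sub-fields."""
        return {
            "p":   (dw3 >> 16) & 0x1,   # phase tag
            "sc":  (dw3 >> 17) & 0xff,  # status code
            "sct": (dw3 >> 25) & 0x7,   # status code type
            "crd": (dw3 >> 28) & 0x3,   # command retry delay
            "m":   (dw3 >> 30) & 0x1,   # more status info in a log page
            "dnr": (dw3 >> 31) & 0x1,   # do not retry
        }

    # The value seen throughout this log: sct=0x3, sc=0x02, p=0, m=0, dnr=0.
    st = decode_cqe_dw3((0x3 << 25) | (0x02 << 17))
    assert (st["sct"], st["sc"], st["dnr"]) == (0x3, 0x02, 0)
    print(SCT_NAMES[st["sct"]], "/", PATH_SC_NAMES[st["sc"]])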
[log elided, 00:30:44.153-00:30:44.157 (23:36:23.256-23:36:23.262): the same pattern continues with a second pass over lba 67912-68928 under different cids, the READs of lba 67912 and 67920 recurring among the WRITEs; every completion is still ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0.]
sqid:1 cid:10 nsid:1 lba:67920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.157 [2024-07-25 23:36:23.262374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:44.157 [2024-07-25 23:36:23.262396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.157 [2024-07-25 23:36:23.262411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:44.157 [2024-07-25 23:36:23.262433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.157 [2024-07-25 23:36:23.262448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:44.157 [2024-07-25 23:36:23.262470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:68064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.157 [2024-07-25 23:36:23.262485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.157 [2024-07-25 23:36:23.262507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.157 [2024-07-25 23:36:23.262522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.157 [2024-07-25 23:36:23.262544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:68080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.157 [2024-07-25 23:36:23.262559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:44.157 [2024-07-25 23:36:23.262581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.157 [2024-07-25 23:36:23.262597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:44.157 [2024-07-25 23:36:23.262620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.157 [2024-07-25 23:36:23.262636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:44.157 [2024-07-25 23:36:23.262657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.157 [2024-07-25 23:36:23.262673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:44.157 [2024-07-25 23:36:23.262699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.157 [2024-07-25 23:36:23.262715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:44.157 [2024-07-25 23:36:23.262738] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.157 [2024-07-25 23:36:23.262753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:44.157 [2024-07-25 23:36:23.262775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:68128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.157 [2024-07-25 23:36:23.262791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:44.157 [2024-07-25 23:36:23.262829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:68136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.157 [2024-07-25 23:36:23.262846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:44.157 [2024-07-25 23:36:23.262869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:68144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.157 [2024-07-25 23:36:23.262885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:44.157 [2024-07-25 23:36:23.262908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:68152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.157 [2024-07-25 23:36:23.262924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:44.157 [2024-07-25 23:36:23.262948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:68160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.157 [2024-07-25 23:36:23.262964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:44.157 [2024-07-25 23:36:23.262986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:68168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.157 [2024-07-25 23:36:23.263003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:44.157 [2024-07-25 23:36:23.263025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.157 [2024-07-25 23:36:23.263041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:44.157 [2024-07-25 23:36:23.263082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:68184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.157 [2024-07-25 23:36:23.263101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:44.157 [2024-07-25 23:36:23.263125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:68192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.157 [2024-07-25 23:36:23.263142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:30:44.157 [2024-07-25 23:36:23.263756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:68200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.157 [2024-07-25 23:36:23.263779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:44.157 [2024-07-25 23:36:23.263811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.263831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.263854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:68216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.263871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.263894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.263910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.263932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.263947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.263970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:68240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.263986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.264009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:68248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.264025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.264047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:68256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.264072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.264098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:68264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.264115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.264138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:68272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.264154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:109 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.264177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:68280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.264193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.264216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:68288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.264232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.264254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:68296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.264270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.264293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:68304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.264313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.264337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:68312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.264353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.264390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:68320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.264405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.264427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:68328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.264442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.264463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:68336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.264479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.264501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:68344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.264516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.264538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:68352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.264553] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.264574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:68360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.264589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.264610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.264625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.264646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:68376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.264661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.264682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:68384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.264697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.264719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:68392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.264734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.264755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:68400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.264774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.264796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:68408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.264811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.264848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:68416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.264864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.264887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:68424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.264902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.264925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:44.158 [2024-07-25 23:36:23.264940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.264962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:68440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.264977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.265000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:68448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.265015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:44.158 [2024-07-25 23:36:23.265037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:68456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.158 [2024-07-25 23:36:23.265080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.265105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:68464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.265122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.265145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:68472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.265161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.265184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:68480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.265200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.265223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:68488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.265239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.265261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:68496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.265277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.265304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:68504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.265320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.265342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 
lba:68512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.265374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.265397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:68520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.265412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.265434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:68528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.265450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.265472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:68536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.265488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.265509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:68544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.265525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.265547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:68552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.265563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.265585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:68560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.265601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.265622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.265637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.265659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:68576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.265689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.265711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:68584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.265726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.265747] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:68592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.265762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.265786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.265802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.265823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:68608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.265837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.265858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:68616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.265873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.265895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:68624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.265910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.265930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:68632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.265945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.265966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:68640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.265981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.266002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:68648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.266017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.266054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.266080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.266106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:68664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.266123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004b p:0 m:0 dnr:0 
00:30:44.159 [2024-07-25 23:36:23.266146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:68672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.266162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.266184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:68680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.266200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.266223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:68688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.266239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.266262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:68696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.266282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.266305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.159 [2024-07-25 23:36:23.266321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:44.159 [2024-07-25 23:36:23.266344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:68712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.266379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.266402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:68720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.266433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.266456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:68728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.266471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.266492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:68736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.266507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.266527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:68744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.266543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.266563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:68752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.266578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.266600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:67912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.160 [2024-07-25 23:36:23.266615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.266636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.266652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.266673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.266688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.266709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.266724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.267569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.267610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.267640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.267658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.267681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:67968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.267697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.267719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.267735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.267758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:68760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.267774] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.267796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:68768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.267812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.267834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:68776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.267850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.267872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:68784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.267888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.267911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:68792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.267927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.267949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:68800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.267965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.267988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:68808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.268004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.268041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:68816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.268057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.268102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.268120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.268147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:68832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.268163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.268186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:68840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:44.160 [2024-07-25 23:36:23.268202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.268224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:68848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.268240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.268262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:68856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.268278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.268301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.268317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.268339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:68872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.268355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.268377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:68880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.268393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.268432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:68888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.268447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.268469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:68896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.160 [2024-07-25 23:36:23.268484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:44.160 [2024-07-25 23:36:23.268505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:68904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.161 [2024-07-25 23:36:23.268521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:44.161 [2024-07-25 23:36:23.268543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.161 [2024-07-25 23:36:23.268558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:44.161 [2024-07-25 23:36:23.268580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 
lba:68920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.161 [2024-07-25 23:36:23.268595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:44.161 [2024-07-25 23:36:23.268621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:68928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.161 [2024-07-25 23:36:23.268637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:44.161 [2024-07-25 23:36:23.268659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.161 [2024-07-25 23:36:23.268674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:44.161 [2024-07-25 23:36:23.268696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.161 [2024-07-25 23:36:23.268711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:44.161 [2024-07-25 23:36:23.268733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.161 [2024-07-25 23:36:23.268749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:44.161 [2024-07-25 23:36:23.268771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.161 [2024-07-25 23:36:23.268786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:44.161 [2024-07-25 23:36:23.268808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.161 [2024-07-25 23:36:23.268823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:44.161 [2024-07-25 23:36:23.268846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.161 [2024-07-25 23:36:23.268861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:44.161 [2024-07-25 23:36:23.268883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.161 [2024-07-25 23:36:23.268899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:44.161 [2024-07-25 23:36:23.268920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.161 [2024-07-25 23:36:23.268936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:44.161 [2024-07-25 23:36:23.268958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:67920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.161 [2024-07-25 23:36:23.268973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:44.161 [2024-07-25 23:36:23.268995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.161 [2024-07-25 23:36:23.269010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:44.161 [2024-07-25 23:36:23.269032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.161 [2024-07-25 23:36:23.269073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:44.161 [2024-07-25 23:36:23.269100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:68064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.161 [2024-07-25 23:36:23.269121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.161 [2024-07-25 23:36:23.269145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.161 [2024-07-25 23:36:23.269161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.161 [2024-07-25 23:36:23.269184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:68080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.161 [2024-07-25 23:36:23.269200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:44.161 [2024-07-25 23:36:23.269223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.161 [2024-07-25 23:36:23.269239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:44.161 [2024-07-25 23:36:23.269262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.161 [2024-07-25 23:36:23.269278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:44.161 [2024-07-25 23:36:23.269300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.161 [2024-07-25 23:36:23.269316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:44.161 [2024-07-25 23:36:23.269353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.161 [2024-07-25 23:36:23.269372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 
00:30:44.161 [2024-07-25 23:36:23.269396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.161 [2024-07-25 23:36:23.269412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
[... repeated *NOTICE* record pairs elided (00:30:44.161-00:30:44.168, 2024-07-25 23:36:23.269-23:36:23.279): nvme_qpair.c: 243:nvme_io_qpair_print_command WRITE (and two READ) commands on sqid:1 nsid:1, lba 67912-68928, len:8, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1, sqhd 0x0008 through 0x007f and wrapping to 0x0054, p:0 m:0 dnr:0 ...]
00:30:44.168 [2024-07-25 23:36:23.279793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:68744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.168 [2024-07-25 23:36:23.279809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:44.168 [2024-07-25 23:36:23.279830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:68752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.168 [2024-07-25 23:36:23.279845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:44.168 [2024-07-25 23:36:23.279867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:67912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.168 [2024-07-25 23:36:23.279883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:44.168 [2024-07-25 23:36:23.279905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.168 [2024-07-25 23:36:23.279922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:44.168 [2024-07-25 23:36:23.280761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.168 [2024-07-25 23:36:23.280785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:44.168 [2024-07-25 23:36:23.280812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.168 [2024-07-25 23:36:23.280830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:44.168 [2024-07-25 23:36:23.280853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.168 [2024-07-25 23:36:23.280869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:44.168 [2024-07-25 23:36:23.280892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.168 [2024-07-25 23:36:23.280908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:44.168 [2024-07-25 23:36:23.280936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:67968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.168 [2024-07-25 23:36:23.280953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:44.168 [2024-07-25 23:36:23.280976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.168 [2024-07-25 23:36:23.280992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:44.168 [2024-07-25 23:36:23.281015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:68760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:44.168 [2024-07-25 23:36:23.281031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:44.168 [2024-07-25 23:36:23.281054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:68768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.168 [2024-07-25 23:36:23.281080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:44.168 [2024-07-25 23:36:23.281104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:68776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.168 [2024-07-25 23:36:23.281121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:44.168 [2024-07-25 23:36:23.281144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:68784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.168 [2024-07-25 23:36:23.281176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:44.168 [2024-07-25 23:36:23.281199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:68792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.168 [2024-07-25 23:36:23.281215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:44.168 [2024-07-25 23:36:23.281237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:68800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.168 [2024-07-25 23:36:23.281253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:44.168 [2024-07-25 23:36:23.281275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.168 [2024-07-25 23:36:23.281290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:44.168 [2024-07-25 23:36:23.281312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:68816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.281328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.281349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:68824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.281365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.281386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:68832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.281402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.281427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:68840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.281444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.281465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.281481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.281503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:68856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.281519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.281555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:68864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.281571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.281593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:68872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.281610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.281631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:68880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.281646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.281667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:68888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.281683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.281704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.281719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.281740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:68904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.281755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.281776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:68912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.281791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.281812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:68920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.281827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.281848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:68928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.281863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.281884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.281903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.281925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.281941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.281962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.281978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.281999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.282013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.282035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.282073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.282098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.282115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.282137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.282153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.282175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.282191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 
00:30:44.169 [2024-07-25 23:36:23.282214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:67920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.169 [2024-07-25 23:36:23.282230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.282252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.282268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.282290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.282306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.282328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:68064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.282343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.282380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.282400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.282422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:68080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.282438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.282460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.282493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.282516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.282532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.282553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.169 [2024-07-25 23:36:23.282568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:44.169 [2024-07-25 23:36:23.282590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.282623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.282646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.282663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.282686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:68128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.282702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.282725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:68136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.282741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.282763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:68144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.282779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.282802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:68152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.282819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.282841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:68160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.282857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.282880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:68168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.282896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.283520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:68176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.283546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.283574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:68184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.283592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.283615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:68192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.283632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.283655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:68200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.283672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.283695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:68208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.283711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.283734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:68216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.283751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.283774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:68224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.283791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.283829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:68232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.283846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.283869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:68240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.283900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.283922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:68248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.283938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.283959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.283975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.283996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:68264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.284011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.284037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:68272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:44.170 [2024-07-25 23:36:23.284079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.284120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:68280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.284138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.284162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:68288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.284180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.284203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:68296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.284219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.284242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.284258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.284281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:68312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.284297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.284319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:68320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.284335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.284373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:68328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.284389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.284411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:68336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.284426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.284448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:68344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.284465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.284486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 
lba:68352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.284503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:44.170 [2024-07-25 23:36:23.284525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.170 [2024-07-25 23:36:23.284540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.284562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:68368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.284581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.284604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:68376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.284619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.284641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:68384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.284672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.284696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.284712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.284734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:68400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.284751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.284774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:68408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.284790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.284813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:68416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.284829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.284851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:68424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.284867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.284890] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:68432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.284906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.284929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:68440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.284946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.284969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.284985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.285007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:68456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.285023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.285045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:68464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.285067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.285097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:68472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.285114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.285142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:68480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.285159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.285185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:68488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.285202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.285226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:68496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.285242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.285265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:68504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.285281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 
00:30:44.171 [2024-07-25 23:36:23.285304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:68512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.285320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.285343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:68520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.285360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.285382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.285398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.285421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:68536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.285438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.285460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:68544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.285489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.285511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:68552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.285527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.285548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:68560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.285563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.285588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:68568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.285604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.285626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:68576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.285641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.285663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:68584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.285678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.285699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.285715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.285737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:68600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.285752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.285773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:68608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.171 [2024-07-25 23:36:23.285788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:44.171 [2024-07-25 23:36:23.285809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:68616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.172 [2024-07-25 23:36:23.285824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:44.172 [2024-07-25 23:36:23.285845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:68624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.172 [2024-07-25 23:36:23.285860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:44.172 [2024-07-25 23:36:23.285882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:68632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.172 [2024-07-25 23:36:23.285897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:44.172 [2024-07-25 23:36:23.285918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:68640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.172 [2024-07-25 23:36:23.285933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:44.172 [2024-07-25 23:36:23.285955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:68648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.172 [2024-07-25 23:36:23.285970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:44.172 [2024-07-25 23:36:23.285991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:68656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.172 [2024-07-25 23:36:23.286006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:44.172 [2024-07-25 23:36:23.286027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:68664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.172 [2024-07-25 23:36:23.286069] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:44.172 [2024-07-25 23:36:23.286096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:68672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.172 [2024-07-25 23:36:23.286128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:44.172 [2024-07-25 23:36:23.286152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:68680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.172 [2024-07-25 23:36:23.286169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:44.172 [2024-07-25 23:36:23.286192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:68688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.172 [2024-07-25 23:36:23.286208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:44.172 [2024-07-25 23:36:23.286230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:68696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.172 [2024-07-25 23:36:23.286247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:44.172 [2024-07-25 23:36:23.286269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:68704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.172 [2024-07-25 23:36:23.286285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:44.172 [2024-07-25 23:36:23.286308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:68712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.172 [2024-07-25 23:36:23.286325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:44.172 [2024-07-25 23:36:23.286347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:68720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.172 [2024-07-25 23:36:23.286380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:44.172 [2024-07-25 23:36:23.286402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.172 [2024-07-25 23:36:23.286433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:44.172 [2024-07-25 23:36:23.286455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.172 [2024-07-25 23:36:23.286470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:44.172 [2024-07-25 23:36:23.286492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:68744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:44.172 [2024-07-25 23:36:23.286507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:30:44.172 [2024-07-25 23:36:23.286528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.172 [2024-07-25 23:36:23.286543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:30:44.172 [2024-07-25 23:36:23.286565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:67912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:44.172 [2024-07-25 23:36:23.286584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:30:44.172 [2024-07-25 23:36:23.287431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.172 [2024-07-25 23:36:23.287454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:30:44.172 [2024-07-25 23:36:23.287498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.172 [2024-07-25 23:36:23.287516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:30:44.172 [2024-07-25 23:36:23.287539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.172 [2024-07-25 23:36:23.287555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:30:44.172 [2024-07-25 23:36:23.287578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.172 [2024-07-25 23:36:23.287595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:30:44.172 [2024-07-25 23:36:23.287617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.172 [2024-07-25 23:36:23.287633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:30:44.172 [2024-07-25 23:36:23.287656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:67968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.172 [2024-07-25 23:36:23.287672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:30:44.172 [2024-07-25 23:36:23.287694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.172 [2024-07-25 23:36:23.287710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:30:44.172 [2024-07-25 23:36:23.287732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:68760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.172 [2024-07-25 23:36:23.287748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:44.172 [2024-07-25 23:36:23.287771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:68768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.172 [2024-07-25 23:36:23.287787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:30:44.172 [2024-07-25 23:36:23.287810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:68776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.287826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.287863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:68784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.287879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.287902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:68792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.287917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.287944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:68800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.287961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.287982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:68808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.287999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.288021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:68816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.288036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.288066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:68824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.288099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.288124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:68832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.288140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.288163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:68840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.288179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.288202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:68848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.288218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.288256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:68856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.288272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.288294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.288309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.288331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:68872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.288346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.288368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:68880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.288383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.288405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:68888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.288421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.288446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:68896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.288462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.288484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:68904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.288500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.288523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:68912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.288539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.288561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:68920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.288576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.288598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:68928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.288614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.288635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.288651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.288673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.288688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.288710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.288726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.288748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.288764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.288786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.288801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.288823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.288838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.288860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.288876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.288898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.288917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.288939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:67920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:44.173 [2024-07-25 23:36:23.288955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.288977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.288993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.289015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.289031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.289052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.173 [2024-07-25 23:36:23.289075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:44.173 [2024-07-25 23:36:23.289099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.289115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.289137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:68080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.289153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.289190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.289207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.289231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.289248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.289270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.289287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.289309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.289325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.289348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.289364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.289387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:68128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.289409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.289433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:68136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.289450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.289472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.289489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.289511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:68152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.289527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.289550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:68160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.289567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.290206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:68168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.290229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.290256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.290274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.290297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:68184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.290314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.290336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.290352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.290375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.290391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.290414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:68208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.290430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.290453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:68216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.290484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.290507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:68224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.290522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.290549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:68232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.290565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.290586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:68240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.290617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.290639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:68248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.290653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.290691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:68256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.290706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.290728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:68264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.290743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.290765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:68272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.290780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.290808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:68280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.290825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.290846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:68288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.290861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.290883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:68296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.290899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.290921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:68304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.290936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.290958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:68312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.174 [2024-07-25 23:36:23.290973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:30:44.174 [2024-07-25 23:36:23.290995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:68320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.291011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.291036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:68328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.291052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.291102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.291120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.291143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:68344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.291159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.291197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:68352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.291214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.291237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:68360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.291268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.291292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:68368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.291308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.291330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:68376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.291346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.291385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:68384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.291401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.291423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:68392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.291439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.291461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.291476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.291499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:68408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.291514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.291536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:68416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.291551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.291573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:68424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.291592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.291615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:68432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.291631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.291652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:68440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.291668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.291689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:68448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.291705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.291727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:68456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.291743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.291764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:68464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.291780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.291802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:68472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.291818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.291839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.291855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.291876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:68488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.291892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.291914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:68496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.291929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.291951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:68504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.291966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.291988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:68512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.292004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.292026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:68520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.292045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.292090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:68528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.292109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.292133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.292150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.292172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:68544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.292188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.292211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:68552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.292227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.292265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:68560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.292281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.292302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.175 [2024-07-25 23:36:23.292318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:30:44.175 [2024-07-25 23:36:23.292340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:68576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.292356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.292392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:68584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.292408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.292428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:68592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.292444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.292465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:68600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.292480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.292500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:68608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.292516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.292537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:68616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.292552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.292577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.292592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.292613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:68632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.292628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.292650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:68640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.292665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.292686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:68648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.292702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.292723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:68656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.292738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.292759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:68664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.292774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.292796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.292811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.292832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:68680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.292847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.292868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:68688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.292883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.292905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:68696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.292920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.292941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:68704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.292956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.292977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:68712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.292992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.293017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:68720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.293033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.293054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:68728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.293092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.293116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:68736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.293132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.293155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:68744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.293171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.293193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:68752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.293208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.293495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:67912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:44.176 [2024-07-25 23:36:23.293518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.293580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.293602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.293631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.293649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.293678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.293695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.293723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.293740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.293768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.293785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.293813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:67968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.293830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.293874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.293894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.293923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:68760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.176 [2024-07-25 23:36:23.293939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:30:44.176 [2024-07-25 23:36:23.293968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:68768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.293984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.294012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:68776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.294028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.294056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:68784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.294097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.294128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.294145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.294174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:68800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.294192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.294220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:68808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.294237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.294265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:68816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.294282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.294311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:68824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.294328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.294356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.294387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.294414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:68840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.294430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.294456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:68848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.294476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.294509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:68856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.294526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.294553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:68864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.294569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.294596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:68872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.294612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.294639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.294655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.294681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:68888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.294697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.294723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:68896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.294739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.294766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:68904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.294782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.294809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:68912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.294824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.294851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:68920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.294867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.294895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:68928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.294910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.294937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.294953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.294979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.294995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.295025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.295056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.295095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.295128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.295158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.295175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.295204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.295220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.295249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.295266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.295295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.295312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.295355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:67920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:44.177 [2024-07-25 23:36:23.295372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.295401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.295432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.295460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.177 [2024-07-25 23:36:23.295475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:30:44.177 [2024-07-25 23:36:23.295502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:68064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.178 [2024-07-25 23:36:23.295518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:23.295545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.178 [2024-07-25 23:36:23.295560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:23.295587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:68080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.178 [2024-07-25 23:36:23.295603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:23.295634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.178 [2024-07-25 23:36:23.295650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:23.295676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.178 [2024-07-25 23:36:23.295692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:23.295718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.178 [2024-07-25 23:36:23.295734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:23.295760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.178 [2024-07-25 23:36:23.295776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:23.295802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.178 [2024-07-25 23:36:23.295818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:23.295845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:68128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.178 [2024-07-25 23:36:23.295861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:23.295888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:68136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.178 [2024-07-25 23:36:23.295903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:23.295931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:68144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.178 [2024-07-25 23:36:23.295947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:23.295974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:68152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.178 [2024-07-25 23:36:23.295990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:23.296170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:68160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.178 [2024-07-25 23:36:23.296193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:38.974823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:70656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:44.178 [2024-07-25 23:36:38.974884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:38.974952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:44.178 [2024-07-25 23:36:38.974973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:38.974997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.178 [2024-07-25 23:36:38.975023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:38.975082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.178 [2024-07-25 23:36:38.975102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:38.975141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.178 [2024-07-25 23:36:38.975160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:38.975184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.178 [2024-07-25 23:36:38.975201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:38.975225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:44.178 [2024-07-25 23:36:38.975242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:38.975265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:44.178 [2024-07-25 23:36:38.975281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:38.975304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:70792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:44.178 [2024-07-25 23:36:38.975320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:38.975343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:44.178 [2024-07-25 23:36:38.975359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:38.975383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:70856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:44.178 [2024-07-25 23:36:38.975400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:38.975448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:70888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:44.178 [2024-07-25 23:36:38.975464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:38.975487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:70920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:44.178 [2024-07-25 23:36:38.975503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:38.975525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:70952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:44.178 [2024-07-25 23:36:38.975542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:30:44.178 [2024-07-25 23:36:38.975564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:70984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:44.178 [2024-07-25 23:36:38.975584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:30:44.179 [2024-07-25 23:36:38.975608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:44.179 [2024-07-25 23:36:38.975624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:30:44.179 [2024-07-25 23:36:38.975647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:70528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:44.179 [2024-07-25 23:36:38.975663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:30:44.179 [2024-07-25 23:36:38.975685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:70544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:44.179 [2024-07-25 23:36:38.975701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:30:44.179 [2024-07-25 23:36:38.975723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:70576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:44.179 [2024-07-25 23:36:38.975739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:30:44.179 [2024-07-25 23:36:38.975763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:44.179 [2024-07-25 23:36:38.975779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:30:44.179 [2024-07-25 23:36:38.977072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:70632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:44.179 [2024-07-25 23:36:38.977101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:30:44.179 [2024-07-25 23:36:38.977138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.179 [2024-07-25 23:36:38.977156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:30:44.179 [2024-07-25 23:36:38.977180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.179 [2024-07-25 23:36:38.977197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005b p:0 m:0
dnr:0 00:30:44.179 [2024-07-25 23:36:38.977220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.179 [2024-07-25 23:36:38.977237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:44.179 [2024-07-25 23:36:38.977260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.179 [2024-07-25 23:36:38.977277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:44.179 [2024-07-25 23:36:38.977300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.179 [2024-07-25 23:36:38.977316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:44.179 [2024-07-25 23:36:38.977339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.179 [2024-07-25 23:36:38.977361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:44.179 [2024-07-25 23:36:38.977386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.179 [2024-07-25 23:36:38.977419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:44.179 [2024-07-25 23:36:38.978704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.179 [2024-07-25 23:36:38.978731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:44.179 [2024-07-25 23:36:38.978760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.179 [2024-07-25 23:36:38.978778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:44.179 [2024-07-25 23:36:38.978803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.179 [2024-07-25 23:36:38.978820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:44.179 [2024-07-25 23:36:38.978843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.179 [2024-07-25 23:36:38.978859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:44.179 [2024-07-25 23:36:38.978882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.179 [2024-07-25 23:36:38.978899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:44.179 [2024-07-25 23:36:38.978922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:70648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.179 [2024-07-25 23:36:38.978939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:44.179 [2024-07-25 23:36:38.978963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:70680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.179 [2024-07-25 23:36:38.978980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:44.179 [2024-07-25 23:36:38.979003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:70712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.179 [2024-07-25 23:36:38.979034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:44.179 [2024-07-25 23:36:38.979064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:70736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.179 [2024-07-25 23:36:38.979099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:44.179 [2024-07-25 23:36:38.979128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.179 [2024-07-25 23:36:38.979146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:44.179 [2024-07-25 23:36:38.979169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.179 [2024-07-25 23:36:38.979186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:44.179 [2024-07-25 23:36:38.979214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:70832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.179 [2024-07-25 23:36:38.979233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:44.179 [2024-07-25 23:36:38.979256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:70864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.179 [2024-07-25 23:36:38.979273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:44.179 [2024-07-25 23:36:38.979295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:70896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.179 [2024-07-25 23:36:38.979312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:44.179 [2024-07-25 23:36:38.979335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:70928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.179 [2024-07-25 23:36:38.979351] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:44.179 [2024-07-25 23:36:38.979374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:70960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.179 [2024-07-25 23:36:38.979406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:44.179 [2024-07-25 23:36:38.979429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.179 [2024-07-25 23:36:38.979445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:44.179 [2024-07-25 23:36:38.979467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.179 [2024-07-25 23:36:38.979483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:44.179 [2024-07-25 23:36:38.979506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.179 [2024-07-25 23:36:38.979521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:44.180 [2024-07-25 23:36:38.979544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.180 [2024-07-25 23:36:38.979560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:44.180 [2024-07-25 23:36:38.979582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.180 [2024-07-25 23:36:38.979598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:44.180 [2024-07-25 23:36:38.979620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.180 [2024-07-25 23:36:38.979636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:44.180 [2024-07-25 23:36:38.979659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.180 [2024-07-25 23:36:38.979675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:44.180 [2024-07-25 23:36:38.979704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:44.180 [2024-07-25 23:36:38.979721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:44.180 [2024-07-25 23:36:38.979743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:44.180 [2024-07-25 23:36:38.979759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:44.180 [2024-07-25 23:36:38.979782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.180 [2024-07-25 23:36:38.979799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:44.180 [2024-07-25 23:36:38.979821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.180 [2024-07-25 23:36:38.979837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:44.180 [2024-07-25 23:36:38.979860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.180 [2024-07-25 23:36:38.979876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:44.180 Received shutdown signal, test time was about 32.519872 seconds 00:30:44.180 00:30:44.180 Latency(us) 00:30:44.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.180 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:30:44.180 Verification LBA range: start 0x0 length 0x4000 00:30:44.180 Nvme0n1 : 32.52 8006.11 31.27 0.00 0.00 15959.70 628.05 4076242.11 00:30:44.180 =================================================================================================================== 00:30:44.180 Total : 8006.11 31.27 0.00 0.00 15959.70 628.05 4076242.11 00:30:44.180 23:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:44.439 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:30:44.439 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:44.439 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:30:44.439 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:44.439 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:30:44.439 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:44.439 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:30:44.439 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:44.439 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:44.439 rmmod nvme_tcp 00:30:44.439 rmmod nvme_fabrics 00:30:44.439 rmmod nvme_keyring 00:30:44.439 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:44.439 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:30:44.439 23:36:42 
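(As a cross-check on the summary table above: the MiB/s column is just the IOPS column times the 4096-byte IO size, 8006.11 * 4096 ≈ 32,793,027 bytes/s, and 32,793,027 / 1,048,576 ≈ 31.27 MiB/s; over the 32.52 s runtime that works out to roughly 260 thousand verified 4 KiB IOs.)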
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:30:44.439 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1503460 ']' 00:30:44.439 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1503460 00:30:44.439 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1503460 ']' 00:30:44.439 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1503460 00:30:44.439 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:30:44.439 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:44.439 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1503460 00:30:44.439 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:44.439 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:44.439 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1503460' 00:30:44.439 killing process with pid 1503460 00:30:44.439 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1503460 00:30:44.439 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1503460 00:30:44.698 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:44.698 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:44.698 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:44.698 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:44.698 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:44.698 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.698 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:44.698 23:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:47.232 00:30:47.232 real 0m41.384s 00:30:47.232 user 2m4.635s 00:30:47.232 sys 0m10.833s 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:47.232 ************************************ 00:30:47.232 END TEST nvmf_host_multipath_status 00:30:47.232 ************************************ 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:47.232 23:36:44 
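Condensed, the teardown traced above is the following fixed sequence. This is a sketch assembled from the xtrace lines, not the verbatim helper bodies: nvmftestfini, nvmfcleanup, and killprocess wrap these steps with retries and sanity checks, and the PID shown is the one from this run.

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem first
    trap - SIGINT SIGTERM EXIT        # clear the failure trap
    sync                              # flush before unloading kernel modules
    modprobe -v -r nvme-tcp           # unloads nvme_tcp, nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 1503460 && wait 1503460      # stop the nvmf_tgt reactor process
    ip -4 addr flush cvl_0_1          # clear the initiator-side test address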
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.232 ************************************ 00:30:47.232 START TEST nvmf_discovery_remove_ifc 00:30:47.232 ************************************ 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:47.232 * Looking for test storage... 00:30:47.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:47.232 23:36:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:47.232 23:36:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:47.232 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:30:47.233 23:36:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@295 -- # net_devs=() 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:49.132 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:49.132 23:36:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.132 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:49.133 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:49.133 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:49.133 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:49.133 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:49.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:30:49.133 00:30:49.133 --- 10.0.0.2 ping statistics --- 00:30:49.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.133 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:49.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:49.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:30:49.133 00:30:49.133 --- 10.0.0.1 ping statistics --- 00:30:49.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.133 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1510322 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1510322 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1510322 ']' 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:49.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
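Stripped of the xtrace prefixes, the network setup above is a compact recipe: the second port (cvl_0_0) is moved into a private network namespace so that target and initiator can share one machine without sharing a TCP/IP stack, and the target application is then launched inside that namespace. The commands, as they appear in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                                  # the reverse ping runs from inside the namespace
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2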
00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:49.133 23:36:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.133 [2024-07-25 23:36:46.769826] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:30:49.134 [2024-07-25 23:36:46.769915] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:49.134 EAL: No free 2048 kB hugepages reported on node 1 00:30:49.134 [2024-07-25 23:36:46.809591] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:49.134 [2024-07-25 23:36:46.839148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.393 [2024-07-25 23:36:46.928474] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:49.394 [2024-07-25 23:36:46.928538] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:49.394 [2024-07-25 23:36:46.928552] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:49.394 [2024-07-25 23:36:46.928570] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:49.394 [2024-07-25 23:36:46.928601] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:49.394 [2024-07-25 23:36:46.928632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.394 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:49.394 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:30:49.394 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:49.394 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:49.394 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.394 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:49.394 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:30:49.394 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.394 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.394 [2024-07-25 23:36:47.078578] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.394 [2024-07-25 23:36:47.086795] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:49.394 null0 00:30:49.653 [2024-07-25 23:36:47.118716] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:49.653 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.653 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1510462 00:30:49.653 23:36:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:30:49.653 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1510462 /tmp/host.sock 00:30:49.653 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1510462 ']' 00:30:49.653 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:30:49.653 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:49.653 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:49.653 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:49.653 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:49.653 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.653 [2024-07-25 23:36:47.184323] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:30:49.653 [2024-07-25 23:36:47.184404] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1510462 ] 00:30:49.653 EAL: No free 2048 kB hugepages reported on node 1 00:30:49.653 [2024-07-25 23:36:47.216430] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
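The second nvmf_tgt started here is the host side of the test: one core (-m 0x1), its own RPC socket at /tmp/host.sock, and --wait-for-rpc so that bdev_nvme options can be set before the framework initializes. The RPC sequence the trace drives next is, in sketch form (rpc_cmd in the trace is the suite's thin wrapper, shown here as direct scripts/rpc.py calls):

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1         # pre-init tuning, flags as in the trace
    scripts/rpc.py -s /tmp/host.sock framework_start_init               # finish the startup deferred by --wait-for-rpc
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach                  # blocks until nvme0 is attached

With --wait-for-attach, the discovery call returns only after the discovery log page has been processed and the nvme0 controller is attached, which is what the discovery_attach_cb and discovery_log_page_cb notices below confirm.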
00:30:49.653 [2024-07-25 23:36:47.246591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.653 [2024-07-25 23:36:47.337260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.911 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:49.911 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:30:49.911 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:49.911 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:30:49.911 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.911 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.911 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.911 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:30:49.911 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.911 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.911 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.911 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:30:49.911 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.911 23:36:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:51.288 [2024-07-25 23:36:48.597792] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:51.288 [2024-07-25 23:36:48.597819] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:51.288 [2024-07-25 23:36:48.597845] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:51.288 [2024-07-25 23:36:48.728285] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:51.288 [2024-07-25 23:36:48.910214] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:51.288 [2024-07-25 23:36:48.910270] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:51.288 [2024-07-25 23:36:48.910306] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:51.288 [2024-07-25 23:36:48.910326] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:51.288 [2024-07-25 23:36:48.910366] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:51.288 23:36:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.288 23:36:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:30:51.288 23:36:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:51.288 23:36:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:51.288 23:36:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.288 23:36:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:51.288 23:36:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:51.288 23:36:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:51.288 23:36:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:51.288 [2024-07-25 23:36:48.915002] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x88c370 was disconnected and freed. delete nvme_qpair. 00:30:51.288 23:36:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.288 23:36:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:30:51.288 23:36:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:30:51.288 23:36:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:30:51.288 23:36:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:30:51.288 23:36:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:51.288 23:36:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:51.288 23:36:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.288 23:36:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:51.288 23:36:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:51.288 23:36:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:51.288 23:36:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:51.547 23:36:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.547 23:36:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:51.547 23:36:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:52.550 23:36:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:52.550 23:36:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:52.550 23:36:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 
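The get_bdev_list/wait_for_bdev polling that repeats through the rest of this test boils down to the simplified reconstruction below; the real helpers live in host/discovery_remove_ifc.sh and add the xtrace plumbing visible in the trace.

    get_bdev_list() {
            rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
            # $1 is the expected list: "nvme0n1" after attach, "" while waiting for removal.
            while [[ "$(get_bdev_list)" != "$1" ]]; do
                    sleep 1
            done
    }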
00:30:52.550 23:36:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:52.550 23:36:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:52.550 23:36:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:52.550 23:36:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:52.550 23:36:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.550 23:36:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:52.550 23:36:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:53.486 23:36:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:53.486 23:36:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:53.486 23:36:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.486 23:36:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:53.486 23:36:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:53.486 23:36:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:53.486 23:36:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:53.486 23:36:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.486 23:36:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:53.486 23:36:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:54.420 23:36:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:54.420 23:36:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:54.420 23:36:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.420 23:36:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:54.420 23:36:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:54.420 23:36:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:54.420 23:36:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:54.678 23:36:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.678 23:36:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:54.678 23:36:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:55.616 23:36:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:55.616 23:36:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:55.616 23:36:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.616 23:36:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:55.616 23:36:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:55.616 23:36:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:55.616 23:36:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:55.616 23:36:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.616 23:36:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:55.616 23:36:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:56.551 23:36:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:56.551 23:36:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:56.551 23:36:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:56.551 23:36:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.551 23:36:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:56.551 23:36:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:56.551 23:36:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:56.551 23:36:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.551 23:36:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:56.551 23:36:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:56.808 [2024-07-25 23:36:54.351407] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:30:56.808 [2024-07-25 23:36:54.351476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.808 [2024-07-25 23:36:54.351500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.808 [2024-07-25 23:36:54.351521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.808 [2024-07-25 23:36:54.351536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.808 [2024-07-25 23:36:54.351551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.808 [2024-07-25 23:36:54.351566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.808 [2024-07-25 23:36:54.351590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:30:56.808 [2024-07-25 23:36:54.351607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.808 [2024-07-25 23:36:54.351624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.808 [2024-07-25 23:36:54.351639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.808 [2024-07-25 23:36:54.351654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x852d70 is same with the state(5) to be set 00:30:56.808 [2024-07-25 23:36:54.361429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x852d70 (9): Bad file descriptor 00:30:56.808 [2024-07-25 23:36:54.371474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:57.743 23:36:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:57.743 23:36:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:57.743 23:36:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:57.743 23:36:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.743 23:36:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:57.743 23:36:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:57.743 23:36:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:57.743 [2024-07-25 23:36:55.437091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:57.743 [2024-07-25 23:36:55.437146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x852d70 with addr=10.0.0.2, port=4420 00:30:57.743 [2024-07-25 23:36:55.437170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x852d70 is same with the state(5) to be set 00:30:57.743 [2024-07-25 23:36:55.437210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x852d70 (9): Bad file descriptor 00:30:57.743 [2024-07-25 23:36:55.437640] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:57.743 [2024-07-25 23:36:55.437686] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:57.743 [2024-07-25 23:36:55.437708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:57.743 [2024-07-25 23:36:55.437727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:57.743 [2024-07-25 23:36:55.437756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
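While the reconnect loop above spins (connect() to 10.0.0.2:4420 keeps failing with errno 110 after the interface was pulled), the path state can be inspected out of band. A minimal example, assuming a recent SPDK rpc.py; the exact output fields vary between versions:

    rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq .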
00:30:57.743 [2024-07-25 23:36:55.437776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:57.743 23:36:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.743 23:36:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:57.743 23:36:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:59.115 [2024-07-25 23:36:56.440269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:59.115 [2024-07-25 23:36:56.440296] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:59.115 [2024-07-25 23:36:56.440319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:59.115 [2024-07-25 23:36:56.440331] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:30:59.115 [2024-07-25 23:36:56.440375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.115 [2024-07-25 23:36:56.440412] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:30:59.115 [2024-07-25 23:36:56.440450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:59.115 [2024-07-25 23:36:56.440473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.115 [2024-07-25 23:36:56.440493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:59.115 [2024-07-25 23:36:56.440510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.115 [2024-07-25 23:36:56.440526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:59.115 [2024-07-25 23:36:56.440542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.115 [2024-07-25 23:36:56.440558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:59.115 [2024-07-25 23:36:56.440574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.115 [2024-07-25 23:36:56.440591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:59.115 [2024-07-25 23:36:56.440605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:59.115 [2024-07-25 23:36:56.440620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
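These failures are the intended result of the earlier ip addr del / ip link set down; the test now reverses them so discovery can re-attach the subsystem (as nvme1 below). Condensed, with the netns and interface names from the log:

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up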
00:30:59.115 [2024-07-25 23:36:56.440897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x852210 (9): Bad file descriptor 00:30:59.115 [2024-07-25 23:36:56.441919] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:30:59.115 [2024-07-25 23:36:56.441945] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:30:59.115 23:36:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:59.115 23:36:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:59.115 23:36:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.115 23:36:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:59.115 23:36:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:59.115 23:36:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:59.115 23:36:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:59.116 23:36:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.116 23:36:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:30:59.116 23:36:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:59.116 23:36:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:59.116 23:36:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:30:59.116 23:36:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:59.116 23:36:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:59.116 23:36:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.116 23:36:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:59.116 23:36:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:59.116 23:36:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:59.116 23:36:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:59.116 23:36:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.116 23:36:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:59.116 23:36:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:00.051 23:36:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:00.051 23:36:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:00.051 23:36:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:00.051 23:36:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.051 23:36:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:00.051 23:36:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:00.051 23:36:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:00.051 23:36:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.051 23:36:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:00.051 23:36:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:00.987 [2024-07-25 23:36:58.492763] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:00.987 [2024-07-25 23:36:58.492813] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:00.987 [2024-07-25 23:36:58.492840] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:00.987 [2024-07-25 23:36:58.621249] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:00.987 23:36:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:00.987 23:36:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:00.987 23:36:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:00.987 23:36:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.987 23:36:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:00.987 23:36:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:00.987 23:36:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:00.987 23:36:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.987 23:36:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:00.987 23:36:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:00.987 [2024-07-25 23:36:58.684078] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:00.987 [2024-07-25 23:36:58.684153] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:00.987 [2024-07-25 23:36:58.684194] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:00.987 [2024-07-25 23:36:58.684214] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:00.987 [2024-07-25 23:36:58.684231] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:01.246 [2024-07-25 23:36:58.731656] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x895900 was disconnected and freed. 
delete nvme_qpair. 00:31:02.184 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:02.184 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:02.184 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.184 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:02.184 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:02.184 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:02.184 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:02.184 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.184 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:02.184 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:02.184 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1510462 00:31:02.184 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1510462 ']' 00:31:02.184 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1510462 00:31:02.184 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:31:02.184 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:02.184 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1510462 00:31:02.184 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:02.184 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:02.184 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1510462' 00:31:02.184 killing process with pid 1510462 00:31:02.184 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1510462 00:31:02.184 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1510462 00:31:02.444 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:02.445 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:02.445 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:31:02.445 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:02.445 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:31:02.445 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:02.445 23:36:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:02.445 rmmod nvme_tcp 00:31:02.445 rmmod nvme_fabrics 00:31:02.445 rmmod nvme_keyring 
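killprocess, used here for the host app (pid 1510462) and just below for the target (pid 1510322), is approximately the sketch below; the real helper in autotest_common.sh adds the uname/ps reactor-name guards seen in the trace.

    killprocess() {
            local pid=$1
            kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if already gone
            kill "$pid"
            wait "$pid" 2>/dev/null || true          # reap it if it was our child
    }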
00:31:02.445 23:37:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:02.445 23:37:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:31:02.445 23:37:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:31:02.445 23:37:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1510322 ']' 00:31:02.445 23:37:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1510322 00:31:02.445 23:37:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1510322 ']' 00:31:02.445 23:37:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1510322 00:31:02.445 23:37:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:31:02.445 23:37:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:02.445 23:37:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1510322 00:31:02.445 23:37:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:02.445 23:37:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:02.445 23:37:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1510322' 00:31:02.445 killing process with pid 1510322 00:31:02.445 23:37:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1510322 00:31:02.445 23:37:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1510322 00:31:02.703 23:37:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:02.703 23:37:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:02.703 23:37:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:02.703 23:37:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:02.703 23:37:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:02.703 23:37:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.703 23:37:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:02.703 23:37:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:05.238 00:31:05.238 real 0m17.851s 00:31:05.238 user 0m25.905s 00:31:05.238 sys 0m3.083s 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:05.238 ************************************ 00:31:05.238 END TEST nvmf_discovery_remove_ifc 00:31:05.238 ************************************ 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.238 ************************************ 00:31:05.238 START TEST nvmf_identify_kernel_target 00:31:05.238 ************************************ 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:05.238 * Looking for test storage... 00:31:05.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:05.238 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:05.239 23:37:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:31:05.239 23:37:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:06.615 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:06.615 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:31:06.615 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:06.615 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:06.615 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:06.615 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:06.615 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:06.615 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:31:06.615 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:06.615 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:31:06.615 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:31:06.615 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:31:06.616 
23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:06.616 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:06.616 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:06.616 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:06.616 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:06.874 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:06.874 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:06.874 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.874 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:06.874 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:06.874 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:31:06.874 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:06.874 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:31:06.874 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:06.874 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:06.874 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:06.874 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:06.874 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:06.874 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:06.874 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:06.874 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:06.874 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:06.874 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:06.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:06.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:31:06.875 00:31:06.875 --- 10.0.0.2 ping statistics --- 00:31:06.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.875 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:06.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:06.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:31:06.875 00:31:06.875 --- 10.0.0.1 ping statistics --- 00:31:06.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.875 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:06.875 23:37:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:07.808 Waiting for block devices as requested 00:31:07.808 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:08.067 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:08.067 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:08.326 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:08.326 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:08.326 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:08.326 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:08.585 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:08.585 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:08.585 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:08.585 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:08.844 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:08.844 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:08.844 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:08.844 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:09.103 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:09.103 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
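configure_kernel_target is about to build a kernel (nvmet) target over configfs; the trace that follows shows the mkdir/echo sequence, but xtrace hides the redirect targets. A condensed reconstruction, with the standard nvmet configfs attribute names assumed as the destinations:

    modprobe nvmet
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub"
    mkdir "$sub/namespaces/1"
    mkdir "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_serial"   # assumed attribute
    echo 1 > "$sub/attr_allow_any_host"
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1 > "$sub/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"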
00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:09.363 No valid GPT data, bailing 00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:09.363 23:37:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:31:09.363 00:31:09.363 Discovery Log Number of Records 2, Generation counter 2 00:31:09.363 =====Discovery Log Entry 0====== 00:31:09.363 trtype: tcp 00:31:09.363 adrfam: ipv4 00:31:09.363 subtype: current discovery subsystem 00:31:09.363 treq: not specified, sq flow control disable supported 00:31:09.363 portid: 1 00:31:09.363 trsvcid: 4420 00:31:09.363 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:09.363 traddr: 10.0.0.1 00:31:09.363 eflags: none 00:31:09.363 sectype: none 00:31:09.363 =====Discovery Log Entry 1====== 00:31:09.363 trtype: tcp 00:31:09.363 adrfam: ipv4 00:31:09.363 subtype: nvme subsystem 00:31:09.363 treq: not specified, sq flow control disable supported 00:31:09.363 portid: 1 00:31:09.363 trsvcid: 4420 00:31:09.363 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:09.363 traddr: 10.0.0.1 00:31:09.363 eflags: none 00:31:09.363 sectype: none 00:31:09.363 23:37:07 
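
xtrace drops redirection targets, so the echo lines above look opaque; the files being written are the standard kernel nvmet configfs attributes. The same bring-up spelled out as a sketch (attribute names are the stock nvmet ones, not shown verbatim in the trace; attr_model needs a reasonably recent kernel):

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=$nvmet/ports/1

    modprobe nvmet    # the tcp transport module is pulled in when the port goes live
    mkdir "$subsys" "$subsys/namespaces/1" "$port"

    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1                                > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1                     > "$subsys/namespaces/1/device_path"
    echo 1                                > "$subsys/namespaces/1/enable"

    echo 10.0.0.1 > "$port/addr_traddr"     # listen address
    echo tcp      > "$port/addr_trtype"     # transport
    echo 4420     > "$port/addr_trsvcid"    # service id (port)
    echo ipv4     > "$port/addr_adrfam"

    ln -s "$subsys" "$port/subsystems/"     # publishing the subsystem on the port starts the listener

The ln -s is what actually exposes the subsystem; the nvme discover call above only succeeds after it, which is exactly what the two discovery log entries confirm.
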
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:09.363 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:09.363 EAL: No free 2048 kB hugepages reported on node 1 00:31:09.623 ===================================================== 00:31:09.623 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:09.623 ===================================================== 00:31:09.623 Controller Capabilities/Features 00:31:09.623 ================================ 00:31:09.623 Vendor ID: 0000 00:31:09.623 Subsystem Vendor ID: 0000 00:31:09.623 Serial Number: ff062f0bdb7594f974c5 00:31:09.623 Model Number: Linux 00:31:09.623 Firmware Version: 6.7.0-68 00:31:09.623 Recommended Arb Burst: 0 00:31:09.623 IEEE OUI Identifier: 00 00 00 00:31:09.623 Multi-path I/O 00:31:09.623 May have multiple subsystem ports: No 00:31:09.623 May have multiple controllers: No 00:31:09.623 Associated with SR-IOV VF: No 00:31:09.623 Max Data Transfer Size: Unlimited 00:31:09.623 Max Number of Namespaces: 0 00:31:09.623 Max Number of I/O Queues: 1024 00:31:09.623 NVMe Specification Version (VS): 1.3 00:31:09.623 NVMe Specification Version (Identify): 1.3 00:31:09.623 Maximum Queue Entries: 1024 00:31:09.623 Contiguous Queues Required: No 00:31:09.623 Arbitration Mechanisms Supported 00:31:09.623 Weighted Round Robin: Not Supported 00:31:09.623 Vendor Specific: Not Supported 00:31:09.623 Reset Timeout: 7500 ms 00:31:09.623 Doorbell Stride: 4 bytes 00:31:09.623 NVM Subsystem Reset: Not Supported 00:31:09.623 Command Sets Supported 00:31:09.623 NVM Command Set: Supported 00:31:09.623 Boot Partition: Not Supported 00:31:09.623 Memory Page Size Minimum: 4096 bytes 00:31:09.623 Memory Page Size Maximum: 4096 bytes 00:31:09.623 Persistent Memory Region: Not Supported 00:31:09.624 Optional Asynchronous Events Supported 00:31:09.624 Namespace Attribute Notices: Not Supported 00:31:09.624 Firmware Activation Notices: Not Supported 00:31:09.624 ANA Change Notices: Not Supported 00:31:09.624 PLE Aggregate Log Change Notices: Not Supported 00:31:09.624 LBA Status Info Alert Notices: Not Supported 00:31:09.624 EGE Aggregate Log Change Notices: Not Supported 00:31:09.624 Normal NVM Subsystem Shutdown event: Not Supported 00:31:09.624 Zone Descriptor Change Notices: Not Supported 00:31:09.624 Discovery Log Change Notices: Supported 00:31:09.624 Controller Attributes 00:31:09.624 128-bit Host Identifier: Not Supported 00:31:09.624 Non-Operational Permissive Mode: Not Supported 00:31:09.624 NVM Sets: Not Supported 00:31:09.624 Read Recovery Levels: Not Supported 00:31:09.624 Endurance Groups: Not Supported 00:31:09.624 Predictable Latency Mode: Not Supported 00:31:09.624 Traffic Based Keep ALive: Not Supported 00:31:09.624 Namespace Granularity: Not Supported 00:31:09.624 SQ Associations: Not Supported 00:31:09.624 UUID List: Not Supported 00:31:09.624 Multi-Domain Subsystem: Not Supported 00:31:09.624 Fixed Capacity Management: Not Supported 00:31:09.624 Variable Capacity Management: Not Supported 00:31:09.624 Delete Endurance Group: Not Supported 00:31:09.624 Delete NVM Set: Not Supported 00:31:09.624 Extended LBA Formats Supported: Not Supported 00:31:09.624 Flexible Data Placement Supported: Not Supported 00:31:09.624 00:31:09.624 Controller Memory Buffer Support 00:31:09.624 ================================ 00:31:09.624 Supported: No 
00:31:09.624 00:31:09.624 Persistent Memory Region Support 00:31:09.624 ================================ 00:31:09.624 Supported: No 00:31:09.624 00:31:09.624 Admin Command Set Attributes 00:31:09.624 ============================ 00:31:09.624 Security Send/Receive: Not Supported 00:31:09.624 Format NVM: Not Supported 00:31:09.624 Firmware Activate/Download: Not Supported 00:31:09.624 Namespace Management: Not Supported 00:31:09.624 Device Self-Test: Not Supported 00:31:09.624 Directives: Not Supported 00:31:09.624 NVMe-MI: Not Supported 00:31:09.624 Virtualization Management: Not Supported 00:31:09.624 Doorbell Buffer Config: Not Supported 00:31:09.624 Get LBA Status Capability: Not Supported 00:31:09.624 Command & Feature Lockdown Capability: Not Supported 00:31:09.624 Abort Command Limit: 1 00:31:09.624 Async Event Request Limit: 1 00:31:09.624 Number of Firmware Slots: N/A 00:31:09.624 Firmware Slot 1 Read-Only: N/A 00:31:09.624 Firmware Activation Without Reset: N/A 00:31:09.624 Multiple Update Detection Support: N/A 00:31:09.624 Firmware Update Granularity: No Information Provided 00:31:09.624 Per-Namespace SMART Log: No 00:31:09.624 Asymmetric Namespace Access Log Page: Not Supported 00:31:09.624 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:09.624 Command Effects Log Page: Not Supported 00:31:09.624 Get Log Page Extended Data: Supported 00:31:09.624 Telemetry Log Pages: Not Supported 00:31:09.624 Persistent Event Log Pages: Not Supported 00:31:09.624 Supported Log Pages Log Page: May Support 00:31:09.624 Commands Supported & Effects Log Page: Not Supported 00:31:09.624 Feature Identifiers & Effects Log Page:May Support 00:31:09.624 NVMe-MI Commands & Effects Log Page: May Support 00:31:09.624 Data Area 4 for Telemetry Log: Not Supported 00:31:09.624 Error Log Page Entries Supported: 1 00:31:09.624 Keep Alive: Not Supported 00:31:09.624 00:31:09.624 NVM Command Set Attributes 00:31:09.624 ========================== 00:31:09.624 Submission Queue Entry Size 00:31:09.624 Max: 1 00:31:09.624 Min: 1 00:31:09.624 Completion Queue Entry Size 00:31:09.624 Max: 1 00:31:09.624 Min: 1 00:31:09.624 Number of Namespaces: 0 00:31:09.624 Compare Command: Not Supported 00:31:09.624 Write Uncorrectable Command: Not Supported 00:31:09.624 Dataset Management Command: Not Supported 00:31:09.624 Write Zeroes Command: Not Supported 00:31:09.624 Set Features Save Field: Not Supported 00:31:09.624 Reservations: Not Supported 00:31:09.624 Timestamp: Not Supported 00:31:09.624 Copy: Not Supported 00:31:09.624 Volatile Write Cache: Not Present 00:31:09.624 Atomic Write Unit (Normal): 1 00:31:09.624 Atomic Write Unit (PFail): 1 00:31:09.624 Atomic Compare & Write Unit: 1 00:31:09.624 Fused Compare & Write: Not Supported 00:31:09.624 Scatter-Gather List 00:31:09.624 SGL Command Set: Supported 00:31:09.624 SGL Keyed: Not Supported 00:31:09.624 SGL Bit Bucket Descriptor: Not Supported 00:31:09.624 SGL Metadata Pointer: Not Supported 00:31:09.624 Oversized SGL: Not Supported 00:31:09.624 SGL Metadata Address: Not Supported 00:31:09.624 SGL Offset: Supported 00:31:09.624 Transport SGL Data Block: Not Supported 00:31:09.624 Replay Protected Memory Block: Not Supported 00:31:09.624 00:31:09.624 Firmware Slot Information 00:31:09.624 ========================= 00:31:09.624 Active slot: 0 00:31:09.624 00:31:09.624 00:31:09.624 Error Log 00:31:09.624 ========= 00:31:09.624 00:31:09.624 Active Namespaces 00:31:09.624 ================= 00:31:09.624 Discovery Log Page 00:31:09.624 ================== 00:31:09.624 
Generation Counter: 2 00:31:09.624 Number of Records: 2 00:31:09.624 Record Format: 0 00:31:09.624 00:31:09.624 Discovery Log Entry 0 00:31:09.624 ---------------------- 00:31:09.624 Transport Type: 3 (TCP) 00:31:09.624 Address Family: 1 (IPv4) 00:31:09.624 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:09.624 Entry Flags: 00:31:09.624 Duplicate Returned Information: 0 00:31:09.624 Explicit Persistent Connection Support for Discovery: 0 00:31:09.624 Transport Requirements: 00:31:09.624 Secure Channel: Not Specified 00:31:09.624 Port ID: 1 (0x0001) 00:31:09.624 Controller ID: 65535 (0xffff) 00:31:09.624 Admin Max SQ Size: 32 00:31:09.624 Transport Service Identifier: 4420 00:31:09.624 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:09.624 Transport Address: 10.0.0.1 00:31:09.624 Discovery Log Entry 1 00:31:09.624 ---------------------- 00:31:09.624 Transport Type: 3 (TCP) 00:31:09.624 Address Family: 1 (IPv4) 00:31:09.624 Subsystem Type: 2 (NVM Subsystem) 00:31:09.624 Entry Flags: 00:31:09.624 Duplicate Returned Information: 0 00:31:09.624 Explicit Persistent Connection Support for Discovery: 0 00:31:09.624 Transport Requirements: 00:31:09.624 Secure Channel: Not Specified 00:31:09.624 Port ID: 1 (0x0001) 00:31:09.624 Controller ID: 65535 (0xffff) 00:31:09.624 Admin Max SQ Size: 32 00:31:09.624 Transport Service Identifier: 4420 00:31:09.624 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:09.624 Transport Address: 10.0.0.1 00:31:09.624 23:37:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:09.624 EAL: No free 2048 kB hugepages reported on node 1 00:31:09.624 get_feature(0x01) failed 00:31:09.624 get_feature(0x02) failed 00:31:09.624 get_feature(0x04) failed 00:31:09.625 ===================================================== 00:31:09.625 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:09.625 ===================================================== 00:31:09.625 Controller Capabilities/Features 00:31:09.625 ================================ 00:31:09.625 Vendor ID: 0000 00:31:09.625 Subsystem Vendor ID: 0000 00:31:09.625 Serial Number: c36c8f94c61c08d3a317 00:31:09.625 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:09.625 Firmware Version: 6.7.0-68 00:31:09.625 Recommended Arb Burst: 6 00:31:09.625 IEEE OUI Identifier: 00 00 00 00:31:09.625 Multi-path I/O 00:31:09.625 May have multiple subsystem ports: Yes 00:31:09.625 May have multiple controllers: Yes 00:31:09.625 Associated with SR-IOV VF: No 00:31:09.625 Max Data Transfer Size: Unlimited 00:31:09.625 Max Number of Namespaces: 1024 00:31:09.625 Max Number of I/O Queues: 128 00:31:09.625 NVMe Specification Version (VS): 1.3 00:31:09.625 NVMe Specification Version (Identify): 1.3 00:31:09.625 Maximum Queue Entries: 1024 00:31:09.625 Contiguous Queues Required: No 00:31:09.625 Arbitration Mechanisms Supported 00:31:09.625 Weighted Round Robin: Not Supported 00:31:09.625 Vendor Specific: Not Supported 00:31:09.625 Reset Timeout: 7500 ms 00:31:09.625 Doorbell Stride: 4 bytes 00:31:09.625 NVM Subsystem Reset: Not Supported 00:31:09.625 Command Sets Supported 00:31:09.625 NVM Command Set: Supported 00:31:09.625 Boot Partition: Not Supported 00:31:09.625 Memory Page Size Minimum: 4096 bytes 00:31:09.625 Memory Page Size Maximum: 4096 bytes 00:31:09.625 
Persistent Memory Region: Not Supported 00:31:09.625 Optional Asynchronous Events Supported 00:31:09.625 Namespace Attribute Notices: Supported 00:31:09.625 Firmware Activation Notices: Not Supported 00:31:09.625 ANA Change Notices: Supported 00:31:09.625 PLE Aggregate Log Change Notices: Not Supported 00:31:09.625 LBA Status Info Alert Notices: Not Supported 00:31:09.625 EGE Aggregate Log Change Notices: Not Supported 00:31:09.625 Normal NVM Subsystem Shutdown event: Not Supported 00:31:09.625 Zone Descriptor Change Notices: Not Supported 00:31:09.625 Discovery Log Change Notices: Not Supported 00:31:09.625 Controller Attributes 00:31:09.625 128-bit Host Identifier: Supported 00:31:09.625 Non-Operational Permissive Mode: Not Supported 00:31:09.625 NVM Sets: Not Supported 00:31:09.625 Read Recovery Levels: Not Supported 00:31:09.625 Endurance Groups: Not Supported 00:31:09.625 Predictable Latency Mode: Not Supported 00:31:09.625 Traffic Based Keep ALive: Supported 00:31:09.625 Namespace Granularity: Not Supported 00:31:09.625 SQ Associations: Not Supported 00:31:09.625 UUID List: Not Supported 00:31:09.625 Multi-Domain Subsystem: Not Supported 00:31:09.625 Fixed Capacity Management: Not Supported 00:31:09.625 Variable Capacity Management: Not Supported 00:31:09.625 Delete Endurance Group: Not Supported 00:31:09.625 Delete NVM Set: Not Supported 00:31:09.625 Extended LBA Formats Supported: Not Supported 00:31:09.625 Flexible Data Placement Supported: Not Supported 00:31:09.625 00:31:09.625 Controller Memory Buffer Support 00:31:09.625 ================================ 00:31:09.625 Supported: No 00:31:09.625 00:31:09.625 Persistent Memory Region Support 00:31:09.625 ================================ 00:31:09.625 Supported: No 00:31:09.625 00:31:09.625 Admin Command Set Attributes 00:31:09.625 ============================ 00:31:09.625 Security Send/Receive: Not Supported 00:31:09.625 Format NVM: Not Supported 00:31:09.625 Firmware Activate/Download: Not Supported 00:31:09.625 Namespace Management: Not Supported 00:31:09.625 Device Self-Test: Not Supported 00:31:09.625 Directives: Not Supported 00:31:09.625 NVMe-MI: Not Supported 00:31:09.625 Virtualization Management: Not Supported 00:31:09.625 Doorbell Buffer Config: Not Supported 00:31:09.625 Get LBA Status Capability: Not Supported 00:31:09.625 Command & Feature Lockdown Capability: Not Supported 00:31:09.625 Abort Command Limit: 4 00:31:09.625 Async Event Request Limit: 4 00:31:09.625 Number of Firmware Slots: N/A 00:31:09.625 Firmware Slot 1 Read-Only: N/A 00:31:09.625 Firmware Activation Without Reset: N/A 00:31:09.625 Multiple Update Detection Support: N/A 00:31:09.625 Firmware Update Granularity: No Information Provided 00:31:09.625 Per-Namespace SMART Log: Yes 00:31:09.625 Asymmetric Namespace Access Log Page: Supported 00:31:09.625 ANA Transition Time : 10 sec 00:31:09.625 00:31:09.625 Asymmetric Namespace Access Capabilities 00:31:09.625 ANA Optimized State : Supported 00:31:09.625 ANA Non-Optimized State : Supported 00:31:09.625 ANA Inaccessible State : Supported 00:31:09.625 ANA Persistent Loss State : Supported 00:31:09.625 ANA Change State : Supported 00:31:09.625 ANAGRPID is not changed : No 00:31:09.625 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:09.625 00:31:09.625 ANA Group Identifier Maximum : 128 00:31:09.625 Number of ANA Group Identifiers : 128 00:31:09.625 Max Number of Allowed Namespaces : 1024 00:31:09.625 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:09.625 Command Effects Log Page: Supported 
00:31:09.625 Get Log Page Extended Data: Supported 00:31:09.625 Telemetry Log Pages: Not Supported 00:31:09.625 Persistent Event Log Pages: Not Supported 00:31:09.625 Supported Log Pages Log Page: May Support 00:31:09.625 Commands Supported & Effects Log Page: Not Supported 00:31:09.625 Feature Identifiers & Effects Log Page:May Support 00:31:09.625 NVMe-MI Commands & Effects Log Page: May Support 00:31:09.625 Data Area 4 for Telemetry Log: Not Supported 00:31:09.625 Error Log Page Entries Supported: 128 00:31:09.625 Keep Alive: Supported 00:31:09.625 Keep Alive Granularity: 1000 ms 00:31:09.625 00:31:09.625 NVM Command Set Attributes 00:31:09.625 ========================== 00:31:09.625 Submission Queue Entry Size 00:31:09.625 Max: 64 00:31:09.625 Min: 64 00:31:09.625 Completion Queue Entry Size 00:31:09.625 Max: 16 00:31:09.625 Min: 16 00:31:09.625 Number of Namespaces: 1024 00:31:09.625 Compare Command: Not Supported 00:31:09.625 Write Uncorrectable Command: Not Supported 00:31:09.625 Dataset Management Command: Supported 00:31:09.625 Write Zeroes Command: Supported 00:31:09.625 Set Features Save Field: Not Supported 00:31:09.626 Reservations: Not Supported 00:31:09.626 Timestamp: Not Supported 00:31:09.626 Copy: Not Supported 00:31:09.626 Volatile Write Cache: Present 00:31:09.626 Atomic Write Unit (Normal): 1 00:31:09.626 Atomic Write Unit (PFail): 1 00:31:09.626 Atomic Compare & Write Unit: 1 00:31:09.626 Fused Compare & Write: Not Supported 00:31:09.626 Scatter-Gather List 00:31:09.626 SGL Command Set: Supported 00:31:09.626 SGL Keyed: Not Supported 00:31:09.626 SGL Bit Bucket Descriptor: Not Supported 00:31:09.626 SGL Metadata Pointer: Not Supported 00:31:09.626 Oversized SGL: Not Supported 00:31:09.626 SGL Metadata Address: Not Supported 00:31:09.626 SGL Offset: Supported 00:31:09.626 Transport SGL Data Block: Not Supported 00:31:09.626 Replay Protected Memory Block: Not Supported 00:31:09.626 00:31:09.626 Firmware Slot Information 00:31:09.626 ========================= 00:31:09.626 Active slot: 0 00:31:09.626 00:31:09.626 Asymmetric Namespace Access 00:31:09.626 =========================== 00:31:09.626 Change Count : 0 00:31:09.626 Number of ANA Group Descriptors : 1 00:31:09.626 ANA Group Descriptor : 0 00:31:09.626 ANA Group ID : 1 00:31:09.626 Number of NSID Values : 1 00:31:09.626 Change Count : 0 00:31:09.626 ANA State : 1 00:31:09.626 Namespace Identifier : 1 00:31:09.626 00:31:09.626 Commands Supported and Effects 00:31:09.626 ============================== 00:31:09.626 Admin Commands 00:31:09.626 -------------- 00:31:09.626 Get Log Page (02h): Supported 00:31:09.626 Identify (06h): Supported 00:31:09.626 Abort (08h): Supported 00:31:09.626 Set Features (09h): Supported 00:31:09.626 Get Features (0Ah): Supported 00:31:09.626 Asynchronous Event Request (0Ch): Supported 00:31:09.626 Keep Alive (18h): Supported 00:31:09.626 I/O Commands 00:31:09.626 ------------ 00:31:09.626 Flush (00h): Supported 00:31:09.626 Write (01h): Supported LBA-Change 00:31:09.626 Read (02h): Supported 00:31:09.626 Write Zeroes (08h): Supported LBA-Change 00:31:09.626 Dataset Management (09h): Supported 00:31:09.626 00:31:09.626 Error Log 00:31:09.626 ========= 00:31:09.626 Entry: 0 00:31:09.626 Error Count: 0x3 00:31:09.626 Submission Queue Id: 0x0 00:31:09.626 Command Id: 0x5 00:31:09.626 Phase Bit: 0 00:31:09.626 Status Code: 0x2 00:31:09.626 Status Code Type: 0x0 00:31:09.626 Do Not Retry: 1 00:31:09.626 Error Location: 0x28 00:31:09.626 LBA: 0x0 00:31:09.626 Namespace: 0x0 00:31:09.626 Vendor Log 
Page: 0x0 00:31:09.626 ----------- 00:31:09.626 Entry: 1 00:31:09.626 Error Count: 0x2 00:31:09.626 Submission Queue Id: 0x0 00:31:09.626 Command Id: 0x5 00:31:09.626 Phase Bit: 0 00:31:09.626 Status Code: 0x2 00:31:09.626 Status Code Type: 0x0 00:31:09.626 Do Not Retry: 1 00:31:09.626 Error Location: 0x28 00:31:09.626 LBA: 0x0 00:31:09.626 Namespace: 0x0 00:31:09.626 Vendor Log Page: 0x0 00:31:09.626 ----------- 00:31:09.626 Entry: 2 00:31:09.626 Error Count: 0x1 00:31:09.626 Submission Queue Id: 0x0 00:31:09.626 Command Id: 0x4 00:31:09.626 Phase Bit: 0 00:31:09.626 Status Code: 0x2 00:31:09.626 Status Code Type: 0x0 00:31:09.626 Do Not Retry: 1 00:31:09.626 Error Location: 0x28 00:31:09.626 LBA: 0x0 00:31:09.626 Namespace: 0x0 00:31:09.626 Vendor Log Page: 0x0 00:31:09.626 00:31:09.626 Number of Queues 00:31:09.626 ================ 00:31:09.626 Number of I/O Submission Queues: 128 00:31:09.626 Number of I/O Completion Queues: 128 00:31:09.626 00:31:09.626 ZNS Specific Controller Data 00:31:09.626 ============================ 00:31:09.626 Zone Append Size Limit: 0 00:31:09.626 00:31:09.626 00:31:09.626 Active Namespaces 00:31:09.626 ================= 00:31:09.626 get_feature(0x05) failed 00:31:09.626 Namespace ID:1 00:31:09.626 Command Set Identifier: NVM (00h) 00:31:09.626 Deallocate: Supported 00:31:09.626 Deallocated/Unwritten Error: Not Supported 00:31:09.626 Deallocated Read Value: Unknown 00:31:09.626 Deallocate in Write Zeroes: Not Supported 00:31:09.626 Deallocated Guard Field: 0xFFFF 00:31:09.626 Flush: Supported 00:31:09.626 Reservation: Not Supported 00:31:09.626 Namespace Sharing Capabilities: Multiple Controllers 00:31:09.626 Size (in LBAs): 1953525168 (931GiB) 00:31:09.626 Capacity (in LBAs): 1953525168 (931GiB) 00:31:09.626 Utilization (in LBAs): 1953525168 (931GiB) 00:31:09.626 UUID: f357b4f2-f342-4ed5-8001-227375cf2507 00:31:09.626 Thin Provisioning: Not Supported 00:31:09.626 Per-NS Atomic Units: Yes 00:31:09.626 Atomic Boundary Size (Normal): 0 00:31:09.626 Atomic Boundary Size (PFail): 0 00:31:09.626 Atomic Boundary Offset: 0 00:31:09.626 NGUID/EUI64 Never Reused: No 00:31:09.626 ANA group ID: 1 00:31:09.626 Namespace Write Protected: No 00:31:09.626 Number of LBA Formats: 1 00:31:09.626 Current LBA Format: LBA Format #00 00:31:09.626 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:09.626 00:31:09.626 23:37:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:09.626 23:37:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:09.626 23:37:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:31:09.626 23:37:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:09.626 23:37:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:31:09.626 23:37:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:09.626 23:37:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:09.626 rmmod nvme_tcp 00:31:09.626 rmmod nvme_fabrics 00:31:09.626 23:37:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:09.626 23:37:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:31:09.626 23:37:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:31:09.626 23:37:07 
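
The unload loop just traced retries because nvme-tcp can stay pinned briefly after the last controller drops, and the clean_kernel_target pass that follows has to undo the configfs tree in strict reverse order of creation (symlink first, then namespace, port, subsystem) or the rmdirs fail with EBUSY. A sketch of both halves, reusing the path variables from the bring-up sketch earlier; the retry backoff is illustrative:

    # Host side: unload transport modules, retrying while references drain.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e

    # Target side: tear the configfs tree down in reverse order.
    echo 0 > "$subsys/namespaces/1/enable"
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$subsys/namespaces/1"
    rmdir "$port"
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet
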
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:31:09.626 23:37:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:09.626 23:37:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:09.626 23:37:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:09.626 23:37:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:09.626 23:37:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:09.626 23:37:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.626 23:37:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:09.626 23:37:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:12.163 23:37:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:12.163 23:37:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:12.163 23:37:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:12.163 23:37:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:31:12.163 23:37:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:12.163 23:37:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:12.163 23:37:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:12.163 23:37:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:12.163 23:37:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:12.163 23:37:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:12.163 23:37:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:12.730 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:12.730 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:12.730 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:12.730 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:12.989 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:12.989 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:12.989 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:12.989 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:12.989 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:12.989 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:12.989 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:12.989 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:12.989 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:12.989 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:12.989 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:31:12.989 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:13.927 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:31:13.927 00:31:13.927 real 0m9.219s 00:31:13.927 user 0m1.880s 00:31:13.927 sys 0m3.285s 00:31:13.927 23:37:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:13.927 23:37:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:13.927 ************************************ 00:31:13.927 END TEST nvmf_identify_kernel_target 00:31:13.927 ************************************ 00:31:13.927 23:37:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:13.927 23:37:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:13.927 23:37:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:13.927 23:37:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.186 ************************************ 00:31:14.186 START TEST nvmf_auth_host 00:31:14.186 ************************************ 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:14.186 * Looking for test storage... 00:31:14.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.186 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:14.187 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.187 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:14.187 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:14.187 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:14.187 23:37:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:16.088 23:37:13 
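
The digests and dhgroups arrays defined above fan out into a 3 x 5 DH-HMAC-CHAP test matrix. The arrays are verbatim from the trace; that auth.sh nests the loops exactly this way is inferred from their shapes, and the body here is a placeholder, not the real connect test:

    digests=("sha256" "sha384" "sha512")
    dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            echo "would run DH-HMAC-CHAP with $digest / $dhgroup"
        done
    done
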
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:16.088 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:16.088 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:16.088 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:16.088 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:16.088 23:37:13 
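
The "Found net devices under ..." lines come from resolving each matched NIC's PCI address to its kernel interface through sysfs; nothing transport-specific is involved. A sketch of that lookup (the operstate read is a guess at what the trace's [[ up == up ]] compares):

    pci=0000:0a:00.0
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $netdir ]] || continue
        dev=${netdir##*/}                                   # e.g. cvl_0_0
        [[ $(cat "$netdir/operstate" 2>/dev/null) == up ]] &&
            echo "Found net devices under $pci: $dev"
    done
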
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:16.088 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:16.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:16.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:31:16.089 00:31:16.089 --- 10.0.0.2 ping statistics --- 00:31:16.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.089 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:16.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:16.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:31:16.089 00:31:16.089 --- 10.0.0.1 ping statistics --- 00:31:16.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.089 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1517420 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1517420 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1517420 ']' 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
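
Everything above reduces to a two-port loopback rig: one port of the NIC moves into a private namespace and becomes the target side (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), and nvmf_tgt runs inside the namespace. A condensed sketch with the interface, address, and flag values from the trace; the socket poll is a crude stand-in for waitforlisten, which also polls the RPC:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Start the SPDK target inside the namespace and wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
    while ! [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done
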
00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:16.089 23:37:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.346 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:16.346 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:31:16.346 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:16.346 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:16.346 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.346 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:16.346 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:16.346 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:16.346 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:16.346 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:16.346 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:16.346 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:16.346 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:16.346 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:16.346 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cdf74a8b6983de8e06de193aee672358 00:31:16.346 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:16.346 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.BNt 00:31:16.346 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cdf74a8b6983de8e06de193aee672358 0 00:31:16.346 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cdf74a8b6983de8e06de193aee672358 0 00:31:16.346 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:16.346 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:16.346 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cdf74a8b6983de8e06de193aee672358 00:31:16.346 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:16.346 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:16.603 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.BNt 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.BNt 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.BNt 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:16.604 23:37:14 
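
gen_dhchap_key, traced above, is two steps: xxd pulls the requested number of random hex digits from /dev/urandom, and the hidden "python -" stage wraps them in the DH-HMAC-CHAP secret representation, DHHC-1:<hh>:<base64 of key bytes + CRC-32>:, where <hh> is the digest id the trace passes (0 = unhashed, 1/2/3 = SHA-256/384/512) and the CRC-32 is appended little-endian. A self-contained sketch under that assumption; the helper name matches the script's, but the python body is a reconstruction, not SPDK's verbatim code:

    gen_dhchap_key() {    # args: digest id (0=null 1=sha256 2=sha384 3=sha512), key length in hex chars
        local digest=$1 len=$2 hex
        hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len hex chars == len/2 random bytes
        python3 -c 'import base64,sys,zlib; k=bytes.fromhex(sys.argv[1]); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$hex" "$digest"
    }

    key_file=$(mktemp -t spdk.key-null.XXX)
    gen_dhchap_key 0 32 > "$key_file" && chmod 0600 "$key_file"   # same shape as keys[0] above
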
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e21415ba10298b94c1b676d139cbfd83706266fa510eb2c6837e015734baebda 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.m2N 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e21415ba10298b94c1b676d139cbfd83706266fa510eb2c6837e015734baebda 3 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e21415ba10298b94c1b676d139cbfd83706266fa510eb2c6837e015734baebda 3 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e21415ba10298b94c1b676d139cbfd83706266fa510eb2c6837e015734baebda 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.m2N 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.m2N 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.m2N 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=00d9be023454d327e62931322f3bc96b186c78aa169ddaa0 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.YOK 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 00d9be023454d327e62931322f3bc96b186c78aa169ddaa0 0 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 00d9be023454d327e62931322f3bc96b186c78aa169ddaa0 0 
00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=00d9be023454d327e62931322f3bc96b186c78aa169ddaa0 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.YOK 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.YOK 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.YOK 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eaffe547c744d7b5649c2f0c935957c6183fbf92096157de 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.hkU 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eaffe547c744d7b5649c2f0c935957c6183fbf92096157de 2 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eaffe547c744d7b5649c2f0c935957c6183fbf92096157de 2 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eaffe547c744d7b5649c2f0c935957c6183fbf92096157de 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.hkU 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.hkU 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.hkU 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:16.604 23:37:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4fe8cee1f0d37454746763e3aaebca87 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.4rh 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4fe8cee1f0d37454746763e3aaebca87 1 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4fe8cee1f0d37454746763e3aaebca87 1 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4fe8cee1f0d37454746763e3aaebca87 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.4rh 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.4rh 00:31:16.604 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.4rh 00:31:16.893 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:16.893 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:16.893 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:16.893 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:16.893 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:16.893 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:16.893 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:16.893 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=964c4aff51a44c74c5cf9cc589ad28af 00:31:16.893 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:16.893 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.kk2 00:31:16.893 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 964c4aff51a44c74c5cf9cc589ad28af 1 00:31:16.893 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 964c4aff51a44c74c5cf9cc589ad28af 1 00:31:16.893 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=964c4aff51a44c74c5cf9cc589ad28af 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.kk2 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.kk2 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.kk2 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c2d125324c89a1a45ed692a5f41fbebf9bb5ea5a45f5f2ef 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.GsN 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c2d125324c89a1a45ed692a5f41fbebf9bb5ea5a45f5f2ef 2 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c2d125324c89a1a45ed692a5f41fbebf9bb5ea5a45f5f2ef 2 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c2d125324c89a1a45ed692a5f41fbebf9bb5ea5a45f5f2ef 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.GsN 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.GsN 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.GsN 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:16.894 23:37:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7460b129961e8bca42b9d3f08081fe4f 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Isi 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7460b129961e8bca42b9d3f08081fe4f 0 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7460b129961e8bca42b9d3f08081fe4f 0 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7460b129961e8bca42b9d3f08081fe4f 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Isi 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Isi 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Isi 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8c3726e43c812cf441f541243ff50344a240001fd48ffb0747b1d26014bda3cc 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.02O 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8c3726e43c812cf441f541243ff50344a240001fd48ffb0747b1d26014bda3cc 3 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8c3726e43c812cf441f541243ff50344a240001fd48ffb0747b1d26014bda3cc 3 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8c3726e43c812cf441f541243ff50344a240001fd48ffb0747b1d26014bda3cc 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.02O 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.02O 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.02O 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1517420 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1517420 ']' 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:16.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:16.894 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.BNt 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.m2N ]] 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.m2N 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.YOK 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.hkU ]] 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.hkU 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.4rh 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.kk2 ]] 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kk2 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.GsN 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.177 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Isi ]] 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Isi 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.02O 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:17.436 23:37:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:17.436 23:37:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:31:18.370 Waiting for block devices as requested
00:31:18.370 0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:31:18.370 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:31:18.630 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:31:18.630 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:31:18.630 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:31:18.889 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:31:18.889 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:31:18.889 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:31:18.889 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:31:19.149 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:31:19.149 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:31:19.149 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:31:19.149 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:31:19.408 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:31:19.408 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:31:19.408 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:31:19.408 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:31:19.975 No valid GPT data, bailing
00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:19.975 23:37:17
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420
00:31:19.975
00:31:19.975 Discovery Log Number of Records 2, Generation counter 2
00:31:19.975 =====Discovery Log Entry 0======
00:31:19.975 trtype: tcp
00:31:19.975 adrfam: ipv4
00:31:19.975 subtype: current discovery subsystem
00:31:19.975 treq: not specified, sq flow control disable supported
00:31:19.975 portid: 1
00:31:19.975 trsvcid: 4420
00:31:19.975 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:31:19.975 traddr: 10.0.0.1
00:31:19.975 eflags: none
00:31:19.975 sectype: none
00:31:19.975 =====Discovery Log Entry 1======
00:31:19.975 trtype: tcp
00:31:19.975 adrfam: ipv4
00:31:19.975 subtype: nvme subsystem
00:31:19.975 treq: not specified, sq flow control disable supported
00:31:19.975 portid: 1
00:31:19.975 trsvcid: 4420
00:31:19.975 subnqn: nqn.2024-02.io.spdk:cnode0
00:31:19.975 traddr: 10.0.0.1
00:31:19.975 eflags: none
00:31:19.975 sectype: none
00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:19.975 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host
-- host/auth.sh@49 -- # echo ffdhe2048 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: ]] 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.976 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.234 nvme0n1 00:31:20.234 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.234 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:20.234 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:20.234 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.234 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.234 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.234 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:20.234 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:20.234 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.234 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.234 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.234 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:20.234 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:20.234 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:20.234 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:31:20.234 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:20.234 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:20.234 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:20.234 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:20.234 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: ]] 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
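Stripped down, every authenticated connect in this log is the same two RPC calls; in this harness rpc_cmd is a thin wrapper over scripts/rpc.py against the app's RPC socket (the wrapper detail is our assumption, the arguments are verbatim from the trace). For the keyid=1 attempt just completed:

    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

key1 and ckey1 are names of keyring entries registered earlier with keyring_file_add_key, not file paths; the attach only succeeds when the kernel target holds matching key material.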
00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.235 23:37:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.493 nvme0n1 00:31:20.493 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.493 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:20.493 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.493 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.493 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:20.493 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.493 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:20.493 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:20.493 23:37:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.493 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.493 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.493 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:20.493 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:20.493 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:20.493 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:20.493 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:20.493 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:20.493 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:20.493 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:20.493 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:20.493 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:20.493 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:20.494 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: ]] 00:31:20.494 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:20.494 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:31:20.494 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:20.494 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:20.494 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:20.494 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:20.494 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:20.494 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:20.494 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.494 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.494 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.494 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:20.494 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:20.494 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:20.494 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:20.494 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:20.494 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:20.494 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:20.494 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:20.494 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:20.494 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:20.494 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:20.494 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:20.494 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.494 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.754 nvme0n1 00:31:20.754 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.754 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:20.754 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.754 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.754 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:20.754 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.754 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:20.754 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:20.754 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.754 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.754 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.754 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:20.754 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:20.754 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:20.754 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:20.754 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:20.754 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:20.754 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:20.754 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:20.754 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:20.754 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:20.754 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:20.754 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: ]] 00:31:20.755 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:20.755 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:31:20.755 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:20.755 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:20.755 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:20.755 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:20.755 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:20.755 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:20.755 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.755 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.755 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.755 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:20.755 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:20.755 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:20.755 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:20.755 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:20.755 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:20.755 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:20.755 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:20.755 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:20.755 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:20.755 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:20.755 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:20.755 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.755 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.014 nvme0n1 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: ]] 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.014 nvme0n1 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:21.014 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 
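On the target side, each nvmet_auth_set_key frame (host/auth.sh@42-51) boils down to a few echoes into the kernel nvmet configfs host node. The xtrace does not show the redirection targets, so the attribute paths below are our reading of the kernel nvmet interface rather than something this log proves; for the keyid=4 call just traced, it would look roughly like:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"        # host/auth.sh@48
    echo ffdhe2048 > "$host/dhchap_dhgroup"          # host/auth.sh@49
    echo "DHHC-1:03:OGMz...:" > "$host/dhchap_key"   # host/auth.sh@50, secret abbreviated here
    # ckeys[4] is empty, so no controller key (dhchap_ctrl_key) is programmed for this slot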
00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.274 nvme0n1 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.274 23:37:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.274 23:37:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: ]] 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:21.534 
23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.534 nvme0n1 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.534 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: ]] 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:21.793 23:37:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.793 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.793 nvme0n1 00:31:21.794 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.794 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.794 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:21.794 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.794 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.794 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.794 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:21.794 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:21.794 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.794 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.794 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.053 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:22.053 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:31:22.053 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:22.053 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:22.053 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:22.053 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:22.053 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:22.053 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:22.053 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:22.053 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:22.053 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:22.053 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: ]] 00:31:22.053 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:22.053 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:31:22.053 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:22.053 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:22.053 23:37:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:22.053 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.054 nvme0n1 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:22.054 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:31:22.312 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:22.312 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:22.312 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:22.312 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: ]] 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:22.313 23:37:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.313 nvme0n1 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.313 23:37:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.313 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:22.313 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:22.313 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.313 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
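The connect_authenticate call traced at host/auth.sh@104 expands to the sequence the log repeats for every digest/dhgroup/keyid combination: restrict the host to exactly one digest and one DH group, attach with the keyid's secret(s), confirm the controller actually came up, and tear it down. A sketch assembled from the xtrace (same RPCs and NQNs as in the log; the function body is a reconstruction, not quoted source):

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # allow only the single digest/dhgroup under test on the host side
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # authenticate against the target; a ctrlr key is passed only when one exists
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    # the attach only counts as a pass if the controller is visible afterwards
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}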
00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:22.572 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:22.573 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:22.573 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:22.573 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:22.573 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:22.573 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:22.573 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:22.573 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:22.573 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:22.573 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.573 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.573 nvme0n1 00:31:22.573 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.573 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:22.573 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.573 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.573 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:22.573 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.573 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:22.573 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:22.573 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.573 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: ]] 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.833 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.096 nvme0n1 00:31:23.096 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.096 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:23.096 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:23.096 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.096 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.096 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.096 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:23.096 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:23.096 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.096 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.096 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.096 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:23.096 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:31:23.096 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:23.096 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:23.096 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:23.096 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:23.096 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:23.096 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:23.096 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:23.096 23:37:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:23.096 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:23.096 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: ]] 00:31:23.096 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:23.096 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:31:23.097 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:23.097 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:23.097 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:23.097 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:23.097 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:23.097 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:23.097 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.097 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.097 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.097 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:23.097 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:23.097 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:23.097 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:23.097 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:23.097 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:23.097 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:23.097 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:23.097 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:23.097 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:23.097 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:23.097 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:23.097 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.097 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.356 nvme0n1 00:31:23.356 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.356 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:31:23.356 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.356 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.356 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:23.356 23:37:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: ]] 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
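nvmet_auth_set_key (host/auth.sh@42-51) programs the target side before each attach: the three echoes visible in the trace write the HMAC name, the DH group, and the DHHC-1 secret(s). The xtrace does not show where those echoes land; the configfs attributes below are the standard kernel nvmet per-host ones and are an assumption about the destinations:

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    # assumed destination: the kernel target's per-host DH-HMAC-CHAP attributes
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac($digest)" > "$host/dhchap_hash"     # e.g. hmac(sha256)
    echo "$dhgroup"      > "$host/dhchap_dhgroup"  # e.g. ffdhe4096
    echo "$key"          > "$host/dhchap_key"
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"
}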
00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.356 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.614 nvme0n1 00:31:23.614 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.614 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:23.614 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:23.614 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.614 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.614 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
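An aside on the DHHC-1 strings cycling through this sweep: the field after "DHHC-1:" names the transformation applied to the secret (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), and the base64 payload carries the secret followed by a 4-byte CRC-32, which is why the key lengths differ by keyid. This follows the nvme-cli key convention and is stated here as an assumption; a quick way to check one of the keys from the log:

# a :02: (SHA-384) key should decode to 48 secret bytes + 4 CRC bytes = 52
key='DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==:'
printf '%s' "${key#DHHC-1:??:}" | tr -d ':' | base64 -d | wc -c   # prints 52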
00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: ]] 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:23.873 23:37:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.873 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.131 nvme0n1 00:31:24.131 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.131 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:24.131 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:24.131 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.131 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.131 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.131 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:24.131 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:24.131 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.131 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.131 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.131 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:24.131 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:31:24.131 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:24.131 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.132 23:37:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.392 nvme0n1 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: ]] 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 
]] 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.392 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.963 nvme0n1 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: ]] 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.963 23:37:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.530 nvme0n1 00:31:25.530 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.530 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:25.530 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.530 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.530 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:25.530 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.530 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:25.530 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:25.530 23:37:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.530 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: ]] 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:25.788 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:25.789 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:25.789 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.789 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.352 nvme0n1 00:31:26.352 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.352 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:26.352 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.352 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:26.352 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.352 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.352 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:26.352 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:26.352 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.352 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.352 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.352 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:26.352 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:31:26.352 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:26.352 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:26.352 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:26.352 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:26.352 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:26.353 
23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: ]] 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.353 23:37:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.922 nvme0n1 00:31:26.922 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.922 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:26.922 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.922 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.922 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:31:26.922 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.922 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:26.922 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:26.922 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.922 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.922 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.922 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:26.922 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:31:26.922 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:26.922 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:26.922 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:26.922 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.923 23:37:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.492 nvme0n1 00:31:27.492 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.492 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.492 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.492 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.492 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:27.492 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.492 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.492 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.492 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.492 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.492 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.492 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:27.492 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:27.492 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:31:27.492 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:27.492 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:27.492 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:27.492 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:27.492 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:27.492 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:27.492 23:37:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:27.492 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:27.492 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:27.492 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: ]] 00:31:27.492 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:27.493 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:31:27.493 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:27.493 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:27.493 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:27.493 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:27.493 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:27.493 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:27.493 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.493 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.493 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.493 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:27.493 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:27.493 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:27.493 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:27.493 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:27.493 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.493 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:27.493 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:27.493 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:27.493 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:27.493 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:27.493 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:27.493 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.493 23:37:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.432 nvme0n1 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: ]] 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.432 23:37:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.367 nvme0n1 00:31:29.368 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.626 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:29.626 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.626 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.626 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:29.626 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.626 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.626 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:29.626 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.626 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.626 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:29.627 23:37:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: ]] 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.627 23:37:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.562 nvme0n1 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: ]] 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:30.562 23:37:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:30.562 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:30.563 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:30.563 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:30.563 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:30.563 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:30.563 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:30.563 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:30.563 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:30.563 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:30.563 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:30.563 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:30.563 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.563 23:37:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.502 nvme0n1 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:31.502 23:37:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.502 23:37:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.474 nvme0n1 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: ]] 00:31:32.474 
23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:32.474 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.475 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.475 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:32.475 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.475 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:32.475 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:32.475 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:32.475 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:32.475 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.475 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.732 nvme0n1 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: ]] 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.732 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.733 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:32.733 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:31:32.733 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:32.733 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:32.733 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.733 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.733 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:32.733 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.733 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:32.733 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:32.733 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:32.733 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:32.733 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.733 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.990 nvme0n1 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: ]] 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.990 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.248 nvme0n1 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: ]] 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:33.248 23:37:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:33.248 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:33.249 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:33.249 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.249 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.506 nvme0n1 00:31:33.506 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.506 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:33.506 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.506 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.506 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:33.506 23:37:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.506 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.506 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:33.506 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.506 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.506 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.506 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:33.506 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:31:33.506 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:33.506 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
00:31:33.506 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:33.506 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:33.506 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.507 nvme0n1 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:33.507 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: ]] 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:33.765 23:37:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:33.765 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:33.766 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:33.766 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:33.766 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:33.766 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:33.766 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.766 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.766 nvme0n1 00:31:33.766 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.766 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:33.766 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.766 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.766 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:33.766 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.024 23:37:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: ]] 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:34.024 23:37:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.024 nvme0n1 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:34.024 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.282 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.282 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:34.282 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.282 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.282 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.282 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:34.282 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:31:34.282 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:34.282 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:34.282 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:34.282 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:34.282 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: ]] 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.283 23:37:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.283 nvme0n1 00:31:34.283 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.283 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:34.283 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.283 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.283 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: ]] 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:34.541 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:34.542 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:34.542 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.542 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.542 nvme0n1 00:31:34.542 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.542 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:34.542 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:34.542 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.542 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:34.801 
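connect_authenticate, the initiator half of every iteration, is the pair of rpc_cmd calls visible at host/auth.sh@60-61: one bdev_nvme_set_options restricting the allowed digests and DH groups, then one bdev_nvme_attach_controller passing the keyring names. The array expansion at @58 drops --dhchap-ctrlr-key entirely when the controller key is empty, which is why the keyid=4 attaches in this trace carry only --dhchap-key key4. Condensed into standalone rpc.py form (the key0..key4/ckey0..ckey4 keyring names are the suite's; registering them is assumed to have happened earlier in the script):

    # Sketch of the initiator-side sequence from host/auth.sh@60-61, using
    # rpc.py directly instead of the suite's rpc_cmd wrapper.
    digest=sha384 dhgroup=ffdhe3072 keyid=4 ckey=""   # keyid 4: no controller key
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    ckey_opt=()                                       # omit flag when ckey is empty
    [ -n "$ckey" ] && ckey_opt=(--dhchap-ctrlr-key "ckey$keyid")
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" "${ckey_opt[@]}"
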
23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.801 nvme0n1 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:34.801 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.059 
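Between attaches the suite proves the handshake actually succeeded and then cleans up, as seen at host/auth.sh@64-65 throughout the trace: bdev_nvme_get_controllers piped through jq must report exactly nvme0 (the bare nvme0n1 lines interleaved in the log are the bdev names that bdev_nvme_attach_controller prints on success), after which the controller is detached so the next digest/dhgroup/keyid combination starts from a clean state. A condensed sketch of that check:

    # Sketch of the verify-and-teardown step from host/auth.sh@64-65.
    name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || exit 1    # missing controller => auth handshake failed
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0
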
23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: ]] 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:35.059 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:35.060 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:35.060 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.060 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.318 nvme0n1 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: ]] 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:35.318 23:37:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.318 23:37:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.577 nvme0n1 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: ]] 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:35.577 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:35.834 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:35.834 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.834 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.090 nvme0n1 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: ]] 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.090 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.349 nvme0n1 00:31:36.349 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.349 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.349 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.349 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.349 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.349 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.349 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.349 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.349 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.349 23:37:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:36.349 23:37:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.349 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.607 nvme0n1 00:31:36.607 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.607 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.607 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.607 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.607 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.607 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: ]] 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.865 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.431 nvme0n1 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: ]] 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:37.431 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:37.432 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:37.432 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.432 23:37:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.996 nvme0n1 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.996 23:37:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: ]] 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.996 23:37:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.996 23:37:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.561 nvme0n1 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: ]] 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:38.561 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.561 
23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.126 nvme0n1 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.126 23:37:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.690 nvme0n1 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.690 23:37:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: ]] 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.690 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:39.691 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.691 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.691 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.691 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.691 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:39.691 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:39.691 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:39.691 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.691 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.691 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:39.691 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.691 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:39.691 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:39.691 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:39.691 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:39.691 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.691 23:37:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.623 nvme0n1 00:31:40.623 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.623 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.623 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.623 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.623 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.623 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: ]] 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:40.880 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:40.881 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:40.881 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.881 23:37:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.814 nvme0n1 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: ]] 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.815 
23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.815 23:37:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.748 nvme0n1 00:31:42.748 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.748 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.748 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.748 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.748 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.748 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.748 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.748 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.748 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.748 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.748 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: ]] 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.749 23:37:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.121 nvme0n1 00:31:44.121 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.121 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.121 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.121 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.121 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.121 23:37:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.121 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.121 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.121 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.121 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.121 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.121 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:44.122 23:37:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.122 23:37:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.056 nvme0n1 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: ]] 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:45.056 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:45.057 nvme0n1 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: ]] 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.057 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.315 nvme0n1 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:31:45.315 
23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: ]] 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.315 23:37:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.315 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:45.315 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:45.315 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:45.315 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.315 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.315 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:45.315 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.315 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:45.315 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:45.315 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:45.315 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:45.315 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.315 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.574 nvme0n1 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: ]] 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.574 
23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.574 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.832 nvme0n1 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.832 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.090 nvme0n1 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: ]] 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.090 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.347 nvme0n1 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.347 
23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: ]] 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:46.347 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:46.348 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:46.348 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.348 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:46.348 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.348 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.348 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.348 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.348 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:46.348 23:37:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:46.348 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:46.348 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.348 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.348 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:46.348 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.348 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:46.348 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:46.348 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:46.348 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:46.348 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.348 23:37:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.606 nvme0n1 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:46.606 23:37:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: ]] 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.606 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.864 nvme0n1 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: ]] 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.864 23:37:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.864 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.122 nvme0n1 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:47.123 
23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.123 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
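Each stanza above is one iteration of the same connect/verify cycle: host/auth.sh loads the key for the current keyid into the kernel nvmet target, pins the SPDK initiator to a single digest/DH-group pair, attaches a controller with the DH-CHAP key under test, confirms the controller shows up, and detaches it before the next pass. A minimal sketch of one iteration, assuming rpc_cmd is the test suite's wrapper around scripts/rpc.py against the running target and that key2/ckey2 were registered earlier in the run:

    digest=sha512
    dhgroup=ffdhe3072
    keyid=2

    # Allow only the digest/dhgroup combination under test on the initiator side.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach over TCP on 10.0.0.1:4420, authenticating with the host key and the
    # bidirectional controller key for this keyid.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # Verify the authenticated controller exists, then tear it down for the next pass.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

The surrounding trace repeats exactly this sequence for every keyid (0-4) under each DH group (ffdhe3072 through ffdhe8192), which is why the stanzas differ only in the key material and the --dhchap-dhgroups argument.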
00:31:47.381 nvme0n1 00:31:47.381 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.381 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.381 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.381 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.381 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.381 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.381 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.381 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.381 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.381 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.381 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.381 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:47.381 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.381 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:31:47.381 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.381 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:47.381 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:47.381 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:47.381 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:47.381 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:47.381 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:47.381 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:47.381 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:47.381 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: ]] 00:31:47.381 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:47.381 23:37:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:31:47.381 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.381 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:47.381 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:47.381 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:47.381 23:37:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.381 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:47.381 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.381 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.381 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.381 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.381 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:47.381 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:47.381 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:47.381 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.381 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.381 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:47.381 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.381 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:47.381 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:47.381 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:47.381 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:47.381 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.381 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.639 nvme0n1 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.639 23:37:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: ]] 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:47.639 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.640 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.897 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.897 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.897 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:47.897 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:47.897 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:47.897 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.897 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.897 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:47.897 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.897 23:37:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:47.897 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:47.897 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:47.897 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:47.897 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.897 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.155 nvme0n1 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: ]] 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.155 23:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.413 nvme0n1 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: ]] 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.413 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.671 nvme0n1 00:31:48.671 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.671 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.671 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.671 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.671 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.671 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.929 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.187 nvme0n1 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: ]] 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.187 23:37:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:49.187 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.188 23:37:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.806 nvme0n1 00:31:49.806 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.806 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.806 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.806 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.806 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.806 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.806 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.806 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.806 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.806 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.806 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.806 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.806 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:31:49.806 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.806 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:49.806 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:49.806 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:49.806 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:49.806 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:49.806 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:49.806 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:49.807 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:49.807 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: ]] 00:31:49.807 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:49.807 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:31:49.807 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.807 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:49.807 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:49.807 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:49.807 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.807 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:49.807 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.807 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.807 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.807 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.807 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:49.807 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.807 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:49.807 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.807 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.807 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.807 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.807 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.807 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.807 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.807 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:49.807 23:37:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.807 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.371 nvme0n1 00:31:50.371 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.371 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.371 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.371 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.371 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.371 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.371 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: ]] 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.372 23:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.937 nvme0n1 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: ]] 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.937 23:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.503 nvme0n1 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:51.503 23:37:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.503 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:51.504 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:51.504 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:51.504 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:51.504 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.504 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.069 nvme0n1 00:31:52.069 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.069 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.069 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.069 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.069 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.069 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.069 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.069 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.069 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.069 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2RmNzRhOGI2OTgzZGU4ZTA2ZGUxOTNhZWU2NzIzNTgbVTZ0: 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: ]] 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTIxNDE1YmExMDI5OGI5NGMxYjY3NmQxMzljYmZkODM3MDYyNjZmYTUxMGViMmM2ODM3ZTAxNTczNGJhZWJkYZlxHEY=: 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:52.327 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:52.328 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:52.328 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.328 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.328 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:52.328 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.328 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:52.328 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:52.328 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:52.328 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:52.328 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.328 23:37:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.262 nvme0n1 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: ]] 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.262 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.263 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:53.263 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:53.263 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:53.263 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.263 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.263 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:53.263 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.263 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:53.263 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:53.263 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:53.263 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:53.263 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.263 23:37:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.197 nvme0n1 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.197 23:37:51 
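
Each connect_authenticate pass (host/auth.sh@55-65) repeats the same four RPCs against the SPDK initiator, and the attach only succeeds when DH-HMAC-CHAP completes, so the presence of nvme0 in bdev_nvme_get_controllers is the pass condition. A condensed sketch of the loop body as traced, with the NQNs and 10.0.0.1/4420 taken from the log:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Pass a controller key only when one exists for this keyid (auth.sh@58).
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # Seeing the controller proves authentication succeeded (auth.sh@64).
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
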
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZlOGNlZTFmMGQzNzQ1NDc0Njc2M2UzYWFlYmNhODcAnf6s: 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: ]] 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTY0YzRhZmY1MWE0NGM3NGM1Y2Y5Y2M1ODlhZDI4YWZPBJbE: 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.197 23:37:51 
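
All of the secrets cycled through this suite use the NVMe-oF DH-HMAC-CHAP secret representation DHHC-1:t:base64:, where t is 00 for a secret stored as-is and 01/02/03 for secrets pre-transformed with SHA-256/384/512, and the base64 payload carries the raw secret followed by a CRC-32 check value. Assuming a recent nvme-cli that ships the gen-dhchap-key subcommand (flag names per its documentation), a matching key can be produced like this:

    # Generate a 32-byte secret, pre-transformed with SHA-512 (--hmac=3),
    # bound to the host NQN used throughout this run.
    nvme gen-dhchap-key --key-length=32 --hmac=3 --nqn=nqn.2024-02.io.spdk:host0
    # -> DHHC-1:03:<base64 of secret plus CRC-32>:
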
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.197 23:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.570 nvme0n1 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzJkMTI1MzI0Yzg5YTFhNDVlZDY5MmE1ZjQxZmJlYmY5YmI1ZWE1YTQ1ZjVmMmVmReIbrg==: 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: ]] 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQ2MGIxMjk5NjFlOGJjYTQyYjlkM2YwODA4MWZlNGadNBb9: 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:55.570 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.571 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.571 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.571 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.571 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:55.571 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:55.571 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:55.571 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.571 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.571 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:55.571 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.571 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:55.571 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:55.571 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:55.571 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:55.571 23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.571 
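
The get_main_ns_ip block repeated throughout the trace (nvmf/common.sh@741-755) is a transport-keyed lookup: rdma runs resolve to the first target IP, tcp runs to the initiator IP, which is why every iteration here prints 10.0.0.1. A sketch under those assumptions; the exact name of the transport variable is assumed, and storing a variable name first and dereferencing it second mirrors common.sh@748-755:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # holds a variable NAME, not an address
        [[ -z ${!ip} ]] && return 1            # common.sh@750: the named variable must be set
        echo "${!ip}"                          # 10.0.0.1 in this run
    }
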
23:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.505 nvme0n1 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGMzNzI2ZTQzYzgxMmNmNDQxZjU0MTI0M2ZmNTAzNDRhMjQwMDAxZmQ0OGZmYjA3NDdiMWQyNjAxNGJkYTNjY+f4Y5g=: 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.505 23:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.505 23:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.505 23:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.505 23:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:56.505 23:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:56.505 23:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:56.505 23:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.505 23:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.505 23:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:56.505 23:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.505 23:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:56.505 23:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:56.505 23:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:56.505 23:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:56.505 23:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.505 23:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.437 nvme0n1 00:31:57.437 23:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDBkOWJlMDIzNDU0ZDMyN2U2MjkzMTMyMmYzYmM5NmIxODZjNzhhYTE2OWRkYWEwKTmaDw==: 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: ]] 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWFmZmU1NDdjNzQ0ZDdiNTY0OWMyZjBjOTM1OTU3YzYxODNmYmY5MjA5NjE1N2RleCFnpg==: 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.437 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.437 request: 00:31:57.437 { 00:31:57.437 "name": "nvme0", 00:31:57.437 "trtype": "tcp", 00:31:57.437 "traddr": "10.0.0.1", 00:31:57.437 "adrfam": "ipv4", 00:31:57.438 "trsvcid": "4420", 00:31:57.438 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:57.438 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:57.438 "prchk_reftag": false, 00:31:57.438 "prchk_guard": false, 00:31:57.438 "hdgst": false, 00:31:57.438 "ddgst": false, 00:31:57.438 "method": "bdev_nvme_attach_controller", 00:31:57.438 "req_id": 1 00:31:57.438 } 00:31:57.438 Got JSON-RPC error response 00:31:57.438 response: 00:31:57.438 { 00:31:57.438 "code": -5, 00:31:57.438 "message": "Input/output error" 00:31:57.438 } 00:31:57.438 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:57.438 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:57.438 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:57.438 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:57.438 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:57.438 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.438 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:31:57.438 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.438 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.438 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.696 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:31:57.696 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:31:57.696 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:57.696 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:57.696 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:57.696 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.696 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.696 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:57.696 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.696 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:57.696 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:57.696 23:37:55 
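
From host/auth.sh@110 the suite flips to negative testing: the keys on the kernel target are rotated again (sha256/ffdhe2048, keyid 1), and attach attempts with missing or mismatched initiator keys must fail, which is exactly what the JSON-RPC error above (code -5, Input/output error) records. The NOT wrapper doing the asserting (autotest_common.sh@650-677) also normalizes exit codes, but its essence is an inverter; a reduced sketch:

    # The negative test passes only if the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1   # command unexpectedly succeeded
        fi
        return 0
    }

    # Usage mirroring auth.sh@112: connecting without any DH-CHAP key must fail.
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
        -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
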
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:57.696 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:57.696 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:57.696 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.697 request: 00:31:57.697 { 00:31:57.697 "name": "nvme0", 00:31:57.697 "trtype": "tcp", 00:31:57.697 "traddr": "10.0.0.1", 00:31:57.697 "adrfam": "ipv4", 00:31:57.697 "trsvcid": "4420", 00:31:57.697 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:57.697 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:57.697 "prchk_reftag": false, 00:31:57.697 "prchk_guard": false, 00:31:57.697 "hdgst": false, 00:31:57.697 "ddgst": false, 00:31:57.697 "dhchap_key": "key2", 00:31:57.697 "method": "bdev_nvme_attach_controller", 00:31:57.697 "req_id": 1 00:31:57.697 } 00:31:57.697 Got JSON-RPC error response 00:31:57.697 response: 00:31:57.697 { 00:31:57.697 "code": -5, 00:31:57.697 "message": "Input/output error" 00:31:57.697 } 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.697 request: 00:31:57.697 { 00:31:57.697 "name": "nvme0", 00:31:57.697 "trtype": "tcp", 00:31:57.697 "traddr": "10.0.0.1", 00:31:57.697 "adrfam": "ipv4", 00:31:57.697 "trsvcid": "4420", 00:31:57.697 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:57.697 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:57.697 "prchk_reftag": false, 00:31:57.697 "prchk_guard": false, 00:31:57.697 "hdgst": false, 00:31:57.697 "ddgst": false, 00:31:57.697 "dhchap_key": "key1", 00:31:57.697 "dhchap_ctrlr_key": "ckey2", 00:31:57.697 "method": "bdev_nvme_attach_controller", 00:31:57.697 "req_id": 1 00:31:57.697 } 00:31:57.697 Got JSON-RPC error response 00:31:57.697 response: 00:31:57.697 { 00:31:57.697 "code": -5, 00:31:57.697 "message": "Input/output error" 00:31:57.697 } 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
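
The three rejected attach attempts differ only in which initiator keys they present (none, key2 alone, key1 with ckey2), and each produces the same code -5 response because DH-HMAC-CHAP negotiation fails before the controller comes up. The same failing call can be reproduced by hand against the target's RPC socket; the invocation below is a hedged equivalent of the JSON request above, using the rpc.py path from this workspace:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey2
    # -> "Input/output error" (-5): the mismatched keys never authenticate
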
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:57.697 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:57.697 rmmod nvme_tcp 00:31:57.956 rmmod nvme_fabrics 00:31:57.956 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:57.956 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:31:57.956 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:31:57.956 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1517420 ']' 00:31:57.956 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1517420 00:31:57.956 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1517420 ']' 00:31:57.956 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1517420 00:31:57.956 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:31:57.956 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:57.956 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1517420 00:31:57.956 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:57.956 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:57.956 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1517420' 00:31:57.956 killing process with pid 1517420 00:31:57.956 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1517420 00:31:57.956 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1517420 00:31:57.956 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:57.956 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:57.956 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:57.956 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:57.956 23:37:55 
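
The cleanup traced here (host/auth.sh@128 onward) unwinds the fixture in dependency order: nvmftestfini unloads the initiator-side modules (the rmmod nvme_tcp / rmmod nvme_fabrics lines above), kills the SPDK target by PID (1517420), and removes the spdk network namespace, after which the kernel target itself is dismantled. A condensed sketch with names matching the trace; clean_kernel_target is traced just below:

    cleanup() {
        local subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
        nvmftestfini                      # sync, modprobe -r nvme-tcp nvme-fabrics,
                                          # killprocess $nvmfpid, remove_spdk_ns
        rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"            # auth.sh@25
        rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # auth.sh@26
        clean_kernel_target               # disable ns 1, unlink the port, rmdir
                                          # ns/port/subsystem, modprobe -r nvmet_tcp nvmet
        rm -f /tmp/spdk.key-*             # the generated secrets (auth.sh@28)
    }
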
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:57.956 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.956 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:57.956 23:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.488 23:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:00.488 23:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:00.488 23:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:00.488 23:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:00.488 23:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:00.488 23:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:32:00.488 23:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:00.488 23:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:00.488 23:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:00.488 23:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:00.488 23:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:00.488 23:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:00.488 23:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:01.423 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:01.423 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:01.423 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:01.423 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:01.423 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:01.423 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:01.423 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:01.423 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:01.423 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:01.423 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:01.423 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:01.423 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:01.423 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:01.423 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:01.423 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:01.423 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:02.357 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:02.357 23:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.BNt /tmp/spdk.key-null.YOK /tmp/spdk.key-sha256.4rh /tmp/spdk.key-sha384.GsN /tmp/spdk.key-sha512.02O /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:32:02.357 23:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:03.732 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:03.732 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:03.732 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:03.732 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:03.732 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:03.732 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:03.732 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:03.732 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:03.732 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:03.732 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:03.732 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:03.732 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:03.732 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:03.732 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:03.732 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:03.732 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:03.732 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:03.732 00:32:03.732 real 0m49.748s 00:32:03.732 user 0m47.769s 00:32:03.732 sys 0m5.504s 00:32:03.732 23:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:03.732 23:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.732 ************************************ 00:32:03.732 END TEST nvmf_auth_host 00:32:03.732 ************************************ 00:32:03.732 23:38:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:32:03.732 23:38:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:03.732 23:38:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:03.732 23:38:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:03.732 23:38:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.991 ************************************ 00:32:03.991 START TEST nvmf_digest 00:32:03.991 ************************************ 00:32:03.991 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:03.991 * Looking for test storage... 
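
The real/user/sys summary and the END/START banners above come from the run_test harness in autotest_common.sh, which times each suite and brackets it with markers so the log can be split per test. A simplified sketch of its shape (the real helper also records results and toggles xtrace):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"      # emits the real/user/sys lines when the suite finishes
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

    run_test nvmf_digest \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh \
        --transport=tcp
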
00:32:03.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:03.991 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:03.991 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:32:03.991 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:03.991 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:03.991 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:03.991 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:03.991 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:03.991 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:03.991 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:03.991 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:03.991 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:03.991 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:03.991 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:03.991 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:03.991 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:03.991 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:03.991 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:03.991 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:03.991 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:03.991 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:03.991 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:03.992 
23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:32:03.992 23:38:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:05.891 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:05.891 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:05.891 
23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:05.891 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:05.891 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:05.891 23:38:03 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:05.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:05.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:32:05.891 00:32:05.891 --- 10.0.0.2 ping statistics --- 00:32:05.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:05.891 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:05.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:05.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:32:05.891 00:32:05.891 --- 10.0.0.1 ping statistics --- 00:32:05.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:05.891 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:05.891 ************************************ 00:32:05.891 START TEST nvmf_digest_clean 00:32:05.891 ************************************ 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1526986 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1526986 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1526986 ']' 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:05.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:05.891 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:05.891 [2024-07-25 23:38:03.536773] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:32:05.891 [2024-07-25 23:38:03.536859] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:05.891 EAL: No free 2048 kB hugepages reported on node 1 00:32:05.891 [2024-07-25 23:38:03.578922] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:05.891 [2024-07-25 23:38:03.610458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.149 [2024-07-25 23:38:03.703387] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:06.149 [2024-07-25 23:38:03.703454] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:06.149 [2024-07-25 23:38:03.703471] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:06.149 [2024-07-25 23:38:03.703486] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:32:06.149 [2024-07-25 23:38:03.703499] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:06.149 [2024-07-25 23:38:03.703528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.149 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:06.149 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:06.149 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:06.149 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:06.149 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:06.149 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:06.149 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:32:06.149 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:32:06.149 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:32:06.149 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.149 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:06.407 null0 00:32:06.407 [2024-07-25 23:38:03.923233] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:06.407 [2024-07-25 23:38:03.947479] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:06.407 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.407 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:32:06.407 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:06.407 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:06.407 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:06.407 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:06.407 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:06.407 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:06.407 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1527022 00:32:06.407 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1527022 /var/tmp/bperf.sock 00:32:06.407 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1527022 ']' 00:32:06.407 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:06.407 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:06.407 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:06.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:06.407 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:06.407 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:06.407 23:38:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:06.407 [2024-07-25 23:38:03.998583] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:32:06.407 [2024-07-25 23:38:03.998654] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527022 ] 00:32:06.407 EAL: No free 2048 kB hugepages reported on node 1 00:32:06.407 [2024-07-25 23:38:04.029933] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:06.407 [2024-07-25 23:38:04.057073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.664 [2024-07-25 23:38:04.142552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:06.664 23:38:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:06.664 23:38:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:06.664 23:38:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:06.664 23:38:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:06.664 23:38:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:06.921 23:38:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:06.921 23:38:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:07.486 nvme0n1 00:32:07.486 23:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:07.486 23:38:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:07.486 Running I/O for 2 seconds... 
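Each bperf run traced in this test follows the same control sequence: start bdevperf paused with --wait-for-rpc, initialize the framework over the bperf socket, attach an NVMe/TCP controller with data digest enabled, then drive I/O from bdevperf.py. Condensed from the commands visible in this log (all paths and flags taken verbatim):

BPERF=/var/tmp/bperf.sock
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 2 -r "$BPERF" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s "$BPERF" framework_start_init
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s "$BPERF" \
    bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s "$BPERF" perform_tests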
00:32:10.013 00:32:10.013 Latency(us) 00:32:10.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:10.013 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:10.013 nvme0n1 : 2.01 18328.62 71.60 0.00 0.00 6972.08 3301.07 18641.35 00:32:10.013 =================================================================================================================== 00:32:10.013 Total : 18328.62 71.60 0.00 0.00 6972.08 3301.07 18641.35 00:32:10.013 0 00:32:10.013 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:10.013 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:10.013 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:10.013 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:10.013 | select(.opcode=="crc32c") 00:32:10.013 | "\(.module_name) \(.executed)"' 00:32:10.013 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:10.013 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:10.013 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:10.013 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:10.013 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:10.013 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1527022 00:32:10.013 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1527022 ']' 00:32:10.013 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1527022 00:32:10.013 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:10.013 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:10.013 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1527022 00:32:10.013 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:10.013 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:10.013 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1527022' 00:32:10.013 killing process with pid 1527022 00:32:10.014 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1527022 00:32:10.014 Received shutdown signal, test time was about 2.000000 seconds 00:32:10.014 00:32:10.014 Latency(us) 00:32:10.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:10.014 =================================================================================================================== 00:32:10.014 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:10.014 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1527022 00:32:10.014 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:32:10.014 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:10.014 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:10.014 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:10.014 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:10.014 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:10.014 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:10.014 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1527422 00:32:10.014 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:10.014 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1527422 /var/tmp/bperf.sock 00:32:10.014 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1527422 ']' 00:32:10.014 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:10.014 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:10.014 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:10.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:10.014 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:10.014 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:10.014 [2024-07-25 23:38:07.702384] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:32:10.014 [2024-07-25 23:38:07.702473] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527422 ] 00:32:10.014 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:10.014 Zero copy mechanism will not be used. 00:32:10.014 EAL: No free 2048 kB hugepages reported on node 1 00:32:10.014 [2024-07-25 23:38:07.733524] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
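The repeated notice that an I/O size of 131072 exceeds the zero copy threshold (65536) is informational: writes that large fall back to regular copying in the posix sock layer. If the local rpc.py exposes it, the threshold can be raised over RPC before framework_start_init; the option name below is an assumption to verify against this build:

# assumed option; check `rpc.py sock_impl_set_options --help` on this build
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    sock_impl_set_options -i posix --zerocopy-threshold 131072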
00:32:10.307 [2024-07-25 23:38:07.764549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:10.307 [2024-07-25 23:38:07.854596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:10.307 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:10.307 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:10.307 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:10.307 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:10.307 23:38:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:10.565 23:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:10.565 23:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:11.130 nvme0n1 00:32:11.130 23:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:11.130 23:38:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:11.130 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:11.130 Zero copy mechanism will not be used. 00:32:11.130 Running I/O for 2 seconds... 
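Each run is followed by the same verification, visible above after the first run: read the accel statistics and assert that crc32c digest work was executed, and by the software module (scan_dsa=false, so no DSA offload). An equivalent standalone check, using the jq filter that appears verbatim in this log:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
  | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
  | while read -r acc_module acc_executed; do
        # expected: module "software" with a non-zero executed count
        [[ $acc_module == software ]] && (( acc_executed > 0 )) \
            || echo "unexpected: $acc_module $acc_executed"
    done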
00:32:13.659 00:32:13.659 Latency(us) 00:32:13.659 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:13.659 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:13.659 nvme0n1 : 2.00 4323.56 540.44 0.00 0.00 3696.16 861.68 5364.24 00:32:13.659 =================================================================================================================== 00:32:13.659 Total : 4323.56 540.44 0.00 0.00 3696.16 861.68 5364.24 00:32:13.659 0 00:32:13.659 23:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:13.659 23:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:13.659 23:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:13.659 23:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:13.659 23:38:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:13.659 | select(.opcode=="crc32c") 00:32:13.659 | "\(.module_name) \(.executed)"' 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1527422 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1527422 ']' 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1527422 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1527422 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1527422' 00:32:13.659 killing process with pid 1527422 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1527422 00:32:13.659 Received shutdown signal, test time was about 2.000000 seconds 00:32:13.659 00:32:13.659 Latency(us) 00:32:13.659 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:13.659 =================================================================================================================== 00:32:13.659 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1527422 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1527834 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1527834 /var/tmp/bperf.sock 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1527834 ']' 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:13.659 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:13.660 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:13.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:13.660 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:13.660 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:13.660 [2024-07-25 23:38:11.322623] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:32:13.660 [2024-07-25 23:38:11.322720] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527834 ] 00:32:13.660 EAL: No free 2048 kB hugepages reported on node 1 00:32:13.660 [2024-07-25 23:38:11.355616] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
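The --ddgst flag on bdev_nvme_attach_controller enables the NVMe/TCP data digest (CRC32C over data PDUs), which is what produces the crc32c accel operations being counted. Header digest can be exercised the same way; --hdgst is assumed here and should be checked against the local rpc.py:

# --ddgst is taken from this log; --hdgst is an assumed companion flag
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller --hdgst --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0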
00:32:13.918 [2024-07-25 23:38:11.388588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.918 [2024-07-25 23:38:11.480311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:13.918 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:13.918 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:13.918 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:13.918 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:13.918 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:14.177 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:14.177 23:38:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:14.743 nvme0n1 00:32:14.743 23:38:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:14.743 23:38:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:14.743 Running I/O for 2 seconds... 
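The MiB/s column in each result table is consistent with IOPS times I/O size; for the first randread run above, 18328.62 IOPS at 4096 bytes works out to the reported 71.60 MiB/s. A one-line sanity check:

awk 'BEGIN { iops = 18328.62; bs = 4096; printf "%.2f MiB/s\n", iops * bs / 1048576 }'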
00:32:16.640 00:32:16.640 Latency(us) 00:32:16.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:16.640 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:16.640 nvme0n1 : 2.01 19444.18 75.95 0.00 0.00 6572.30 2985.53 10679.94 00:32:16.640 =================================================================================================================== 00:32:16.640 Total : 19444.18 75.95 0.00 0.00 6572.30 2985.53 10679.94 00:32:16.640 0 00:32:16.640 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:16.640 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:16.640 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:16.640 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:16.640 | select(.opcode=="crc32c") 00:32:16.640 | "\(.module_name) \(.executed)"' 00:32:16.640 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:16.897 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:16.897 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:16.897 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:16.897 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:16.897 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1527834 00:32:16.897 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1527834 ']' 00:32:16.897 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1527834 00:32:16.897 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:16.897 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:16.897 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1527834 00:32:17.155 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:17.155 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:17.155 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1527834' 00:32:17.155 killing process with pid 1527834 00:32:17.155 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1527834 00:32:17.155 Received shutdown signal, test time was about 2.000000 seconds 00:32:17.155 00:32:17.155 Latency(us) 00:32:17.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.155 =================================================================================================================== 00:32:17.155 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:17.155 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1527834 00:32:17.155 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:32:17.155 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:17.155 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:17.155 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:17.155 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:17.155 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:17.155 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:17.155 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1528347 00:32:17.155 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:17.155 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1528347 /var/tmp/bperf.sock 00:32:17.155 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1528347 ']' 00:32:17.155 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:17.155 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:17.155 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:17.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:17.155 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:17.155 23:38:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:17.412 [2024-07-25 23:38:14.895673] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:32:17.412 [2024-07-25 23:38:14.895752] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1528347 ] 00:32:17.412 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:17.412 Zero copy mechanism will not be used. 00:32:17.412 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.412 [2024-07-25 23:38:14.928413] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
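waitforlisten blocks until the freshly started bdevperf instance is up and serving RPCs on /var/tmp/bperf.sock. A minimal sketch of that behavior, assuming a simple polling loop rather than SPDK's actual helper:

wait_for_sock() {
    local sock=$1 i
    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods succeeds once the app is listening on the socket
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}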
00:32:17.412 [2024-07-25 23:38:14.954924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.412 [2024-07-25 23:38:15.040555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:17.412 23:38:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:17.412 23:38:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:17.412 23:38:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:17.412 23:38:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:17.412 23:38:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:17.977 23:38:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:17.977 23:38:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:18.234 nvme0n1 00:32:18.234 23:38:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:18.235 23:38:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:18.235 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:18.235 Zero copy mechanism will not be used. 00:32:18.235 Running I/O for 2 seconds... 
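The killprocess helper traced after each of the runs above follows the same guard sequence: confirm the pid is still alive, read its comm name, refuse to kill the sudo wrapper, then send the signal. A condensed reconstruction of the steps visible in this log:

killprocess_sketch() {
    local pid=$1 process_name
    kill -0 "$pid" || return 1                       # still running?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [ "$process_name" = sudo ] && return 1           # never kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
}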
00:32:20.767 00:32:20.767 Latency(us) 00:32:20.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:20.767 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:20.767 nvme0n1 : 2.00 3582.83 447.85 0.00 0.00 4455.87 3568.07 11116.85 00:32:20.767 =================================================================================================================== 00:32:20.767 Total : 3582.83 447.85 0.00 0.00 4455.87 3568.07 11116.85 00:32:20.767 0 00:32:20.767 23:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:20.767 23:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:20.767 23:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:20.767 23:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:20.767 | select(.opcode=="crc32c") 00:32:20.767 | "\(.module_name) \(.executed)"' 00:32:20.767 23:38:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1528347 00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1528347 ']' 00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1528347 00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1528347 00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1528347' 00:32:20.767 killing process with pid 1528347 00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1528347 00:32:20.767 Received shutdown signal, test time was about 2.000000 seconds 00:32:20.767 00:32:20.767 Latency(us) 00:32:20.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:20.767 =================================================================================================================== 00:32:20.767 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1528347
00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1526986
00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1526986 ']'
00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1526986
00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1526986
00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1526986'
killing process with pid 1526986
00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1526986
00:32:20.767 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1526986
00:32:21.026
00:32:21.026 real 0m15.212s
00:32:21.026 user 0m29.925s
00:32:21.026 sys 0m4.282s
00:32:21.026 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:21.026 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:32:21.026 ************************************
00:32:21.026 END TEST nvmf_digest_clean
00:32:21.026 ************************************
00:32:21.026 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:32:21.026 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:32:21.026 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:21.026 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:32:21.026 ************************************
00:32:21.026 START TEST nvmf_digest_error
00:32:21.026 ************************************
00:32:21.026 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error
00:32:21.026 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:32:21.026 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:32:21.026 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:21.026 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:21.284 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1528790
00:32:21.284 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:32:21.284 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1528790
00:32:21.284 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1528790 ']'
00:32:21.284 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:21.284 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:21.284 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:21.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:21.284 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:21.284 23:38:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:21.284 [2024-07-25 23:38:18.801133] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:32:21.284 [2024-07-25 23:38:18.801213] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:21.284 EAL: No free 2048 kB hugepages reported on node 1
00:32:21.284 [2024-07-25 23:38:18.839092] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:21.284 [2024-07-25 23:38:18.865126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:21.284 [2024-07-25 23:38:18.949199] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:21.284 [2024-07-25 23:38:18.949255] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:21.284 [2024-07-25 23:38:18.949269] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:21.284 [2024-07-25 23:38:18.949280] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:21.284 [2024-07-25 23:38:18.949290] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
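For the error test, nvmfappstart launches nvmf_tgt with --wait-for-rpc, so the target boots only its RPC server and defers framework init; that is what lets the test reassign the crc32c opcode below before any I/O path is wired up. waitforlisten then polls the RPC socket until the app answers. The trace shows waitforlisten's inputs (rpc_addr=/var/tmp/spdk.sock, max_retries=100) but not its loop, so the body below is an assumed shape, not the verbatim autotest implementation:

```bash
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Assumed shape of the traced waitforlisten (common/autotest_common.sh@831-840):
# poll the app's RPC socket until it answers, bailing out if the process dies.
waitforlisten() {
	local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
	[ -z "$pid" ] && return 1
	echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
	while ((max_retries--)); do
		kill -0 "$pid" || return 1    # target died during startup
		# rpc_get_methods is served even under --wait-for-rpc.
		if "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
			return 0              # socket is up and answering RPCs
		fi
		sleep 0.5
	done
	return 1
}

waitforlisten "$nvmfpid"    # as traced: nvmf/common.sh@482, pid 1528790
```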
00:32:21.284 [2024-07-25 23:38:18.949319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:21.284 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:21.284 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:32:21.284 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:21.284 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:21.284 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:21.541 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:21.541 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:32:21.541 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.541 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:21.541 [2024-07-25 23:38:19.029875] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:32:21.541 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.541 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:32:21.541 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:32:21.541 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.541 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:21.541 null0 00:32:21.541 [2024-07-25 23:38:19.141244] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:21.541 [2024-07-25 23:38:19.165494] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:21.541 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.541 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:32:21.541 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:21.541 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:32:21.541 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:32:21.541 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:32:21.541 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1528820 00:32:21.541 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:32:21.541 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1528820 /var/tmp/bperf.sock 00:32:21.541 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1528820 ']' 
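At this point both sides of the error path are being assembled: crc32c has been routed to the accel "error" module on the target (the accel_rpc.c NOTICE above), the null0 bdev, TCP transport, and 10.0.0.2:4420 listener are configured, and a second bdevperf instance (pid 1528820) is coming up on /var/tmp/bperf.sock for 4 KiB random reads. A condensed replay of the traced sequence, including the steps that follow just below; commands and flags are verbatim from the trace, while rpc_cmd is sketched here as a plain wrapper for the target's default RPC socket and the comments are editorial:

```bash
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_cmd() { "$rootdir/scripts/rpc.py" "$@"; }    # stand-in for the harness wrapper
bperf_rpc() { "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

# Target side: send all crc32c work through the error-injection accel module.
# This must land before framework init, hence nvmf_tgt's --wait-for-rpc above.
rpc_cmd accel_assign_opc -o crc32c -m error

# Host side: bdevperf on its own socket; -z parks it until perform_tests.
# "-w randread -o 4096 -q 128 -t 2" matches run_bperf_err's "randread 4096 128".
"$rootdir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
	-w randread -o 4096 -t 2 -q 128 -z &

# Keep per-status NVMe error counters and retry forever, so every injected
# digest failure surfaces as a retried transient error, not a test abort.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach with data digest enabled while injection is still disabled...
rpc_cmd accel_error_inject_error -o crc32c -t disable
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
	-f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# ...then arm corruption (flags copied verbatim from the trace) and run I/O.
# The wall of "data digest error on tqpair" entries that follows is each
# corrupted crc32c being caught on receive and the command being retried.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
"$rootdir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
```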
00:32:21.541 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:21.541 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:21.541 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:21.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:21.541 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:21.541 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:21.541 [2024-07-25 23:38:19.212239] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:32:21.541 [2024-07-25 23:38:19.212302] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1528820 ] 00:32:21.541 EAL: No free 2048 kB hugepages reported on node 1 00:32:21.541 [2024-07-25 23:38:19.243797] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:21.799 [2024-07-25 23:38:19.273871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.799 [2024-07-25 23:38:19.368569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:21.799 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:21.799 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:32:21.799 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:21.799 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:22.057 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:22.057 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.057 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:22.057 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.057 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:22.057 23:38:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:22.316 nvme0n1 00:32:22.575 23:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 
256 00:32:22.575 23:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.575 23:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:22.575 23:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.575 23:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:22.575 23:38:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:22.575 Running I/O for 2 seconds... 00:32:22.575 [2024-07-25 23:38:20.184027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:22.575 [2024-07-25 23:38:20.184089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.575 [2024-07-25 23:38:20.184125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.575 [2024-07-25 23:38:20.197728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:22.576 [2024-07-25 23:38:20.197766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.576 [2024-07-25 23:38:20.197786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.576 [2024-07-25 23:38:20.213994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:22.576 [2024-07-25 23:38:20.214033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.576 [2024-07-25 23:38:20.214054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.576 [2024-07-25 23:38:20.227427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:22.576 [2024-07-25 23:38:20.227464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.576 [2024-07-25 23:38:20.227483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.576 [2024-07-25 23:38:20.243296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:22.576 [2024-07-25 23:38:20.243344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.576 [2024-07-25 23:38:20.243372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.576 [2024-07-25 23:38:20.260660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:22.576 [2024-07-25 23:38:20.260696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.576 [2024-07-25 23:38:20.260716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.576 [2024-07-25 23:38:20.273720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:22.576 [2024-07-25 23:38:20.273756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.576 [2024-07-25 23:38:20.273775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.576 [2024-07-25 23:38:20.287067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:22.576 [2024-07-25 23:38:20.287116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.576 [2024-07-25 23:38:20.287133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.834 [2024-07-25 23:38:20.300332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:22.835 [2024-07-25 23:38:20.300384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.835 [2024-07-25 23:38:20.300405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.835 [2024-07-25 23:38:20.312539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:22.835 [2024-07-25 23:38:20.312575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.835 [2024-07-25 23:38:20.312594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.835 [2024-07-25 23:38:20.326142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:22.835 [2024-07-25 23:38:20.326172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.835 [2024-07-25 23:38:20.326188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.835 [2024-07-25 23:38:20.340044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:22.835 [2024-07-25 23:38:20.340089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.835 [2024-07-25 23:38:20.340109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.835 [2024-07-25 23:38:20.354042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 
00:32:22.835 [2024-07-25 23:38:20.354085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.835 [2024-07-25 23:38:20.354105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.835 [2024-07-25 23:38:20.367461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:22.835 [2024-07-25 23:38:20.367496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.835 [2024-07-25 23:38:20.367519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.835 [2024-07-25 23:38:20.380931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:22.835 [2024-07-25 23:38:20.380967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.835 [2024-07-25 23:38:20.380987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.835 [2024-07-25 23:38:20.395134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:22.835 [2024-07-25 23:38:20.395164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.835 [2024-07-25 23:38:20.395180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.835 [2024-07-25 23:38:20.409746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:22.835 [2024-07-25 23:38:20.409783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.835 [2024-07-25 23:38:20.409802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.835 [2024-07-25 23:38:20.421940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:22.835 [2024-07-25 23:38:20.421976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.835 [2024-07-25 23:38:20.421995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.835 [2024-07-25 23:38:20.437201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:22.835 [2024-07-25 23:38:20.437233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.835 [2024-07-25 23:38:20.437251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.835 [2024-07-25 23:38:20.450191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2255280) 00:32:22.835 [2024-07-25 23:38:20.450225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.835 [2024-07-25 23:38:20.450243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.835 [2024-07-25 23:38:20.465859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:22.835 [2024-07-25 23:38:20.465896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.835 [2024-07-25 23:38:20.465916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.835 [2024-07-25 23:38:20.481758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:22.835 [2024-07-25 23:38:20.481794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.835 [2024-07-25 23:38:20.481822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.835 [2024-07-25 23:38:20.494880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:22.835 [2024-07-25 23:38:20.494917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.835 [2024-07-25 23:38:20.494936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.835 [2024-07-25 23:38:20.508735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:22.835 [2024-07-25 23:38:20.508771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.835 [2024-07-25 23:38:20.508790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.835 [2024-07-25 23:38:20.521836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:22.835 [2024-07-25 23:38:20.521875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.835 [2024-07-25 23:38:20.521895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.835 [2024-07-25 23:38:20.534813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:22.835 [2024-07-25 23:38:20.534849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.835 [2024-07-25 23:38:20.534868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.835 [2024-07-25 23:38:20.549910] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:22.835 [2024-07-25 23:38:20.549946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.835 [2024-07-25 23:38:20.549965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.094 [2024-07-25 23:38:20.562226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.094 [2024-07-25 23:38:20.562257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.094 [2024-07-25 23:38:20.562274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.094 [2024-07-25 23:38:20.577303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.094 [2024-07-25 23:38:20.577333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.094 [2024-07-25 23:38:20.577363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.094 [2024-07-25 23:38:20.590986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.094 [2024-07-25 23:38:20.591022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.094 [2024-07-25 23:38:20.591042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.094 [2024-07-25 23:38:20.604818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.094 [2024-07-25 23:38:20.604862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.094 [2024-07-25 23:38:20.604883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.094 [2024-07-25 23:38:20.620395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.094 [2024-07-25 23:38:20.620434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.094 [2024-07-25 23:38:20.620453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.094 [2024-07-25 23:38:20.637855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.094 [2024-07-25 23:38:20.637890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.094 [2024-07-25 23:38:20.637909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:32:23.094 [2024-07-25 23:38:20.653408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.094 [2024-07-25 23:38:20.653453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.094 [2024-07-25 23:38:20.653474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.094 [2024-07-25 23:38:20.665830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.094 [2024-07-25 23:38:20.665865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.094 [2024-07-25 23:38:20.665883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.094 [2024-07-25 23:38:20.682293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.094 [2024-07-25 23:38:20.682324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.094 [2024-07-25 23:38:20.682341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.094 [2024-07-25 23:38:20.695502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.094 [2024-07-25 23:38:20.695538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.094 [2024-07-25 23:38:20.695557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.094 [2024-07-25 23:38:20.711402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.094 [2024-07-25 23:38:20.711438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.094 [2024-07-25 23:38:20.711458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.094 [2024-07-25 23:38:20.724643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.094 [2024-07-25 23:38:20.724679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.094 [2024-07-25 23:38:20.724698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.094 [2024-07-25 23:38:20.737865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.094 [2024-07-25 23:38:20.737900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.094 [2024-07-25 23:38:20.737920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.094 [2024-07-25 23:38:20.754358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.094 [2024-07-25 23:38:20.754407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.094 [2024-07-25 23:38:20.754427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.094 [2024-07-25 23:38:20.772451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.094 [2024-07-25 23:38:20.772488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.094 [2024-07-25 23:38:20.772507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.094 [2024-07-25 23:38:20.784608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.094 [2024-07-25 23:38:20.784644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.094 [2024-07-25 23:38:20.784663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.094 [2024-07-25 23:38:20.801311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.094 [2024-07-25 23:38:20.801340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.094 [2024-07-25 23:38:20.801355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.094 [2024-07-25 23:38:20.817598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.094 [2024-07-25 23:38:20.817635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.094 [2024-07-25 23:38:20.817654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.354 [2024-07-25 23:38:20.831589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.354 [2024-07-25 23:38:20.831626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.354 [2024-07-25 23:38:20.831645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.354 [2024-07-25 23:38:20.845493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.354 [2024-07-25 23:38:20.845529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.354 [2024-07-25 23:38:20.845549] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.354 [2024-07-25 23:38:20.858758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.354 [2024-07-25 23:38:20.858793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.354 [2024-07-25 23:38:20.858820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.354 [2024-07-25 23:38:20.876311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.354 [2024-07-25 23:38:20.876342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.354 [2024-07-25 23:38:20.876374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.354 [2024-07-25 23:38:20.889852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.354 [2024-07-25 23:38:20.889889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.354 [2024-07-25 23:38:20.889909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.354 [2024-07-25 23:38:20.901475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.354 [2024-07-25 23:38:20.901512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.354 [2024-07-25 23:38:20.901531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.354 [2024-07-25 23:38:20.917691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.354 [2024-07-25 23:38:20.917729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.355 [2024-07-25 23:38:20.917748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.355 [2024-07-25 23:38:20.931105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.355 [2024-07-25 23:38:20.931138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.355 [2024-07-25 23:38:20.931155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.355 [2024-07-25 23:38:20.945013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.355 [2024-07-25 23:38:20.945050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:23.355 [2024-07-25 23:38:20.945079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.355 [2024-07-25 23:38:20.958995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.355 [2024-07-25 23:38:20.959031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.355 [2024-07-25 23:38:20.959050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.355 [2024-07-25 23:38:20.973079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.355 [2024-07-25 23:38:20.973139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.355 [2024-07-25 23:38:20.973157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.355 [2024-07-25 23:38:20.986939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.355 [2024-07-25 23:38:20.986968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.355 [2024-07-25 23:38:20.986984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.355 [2024-07-25 23:38:21.002004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.355 [2024-07-25 23:38:21.002034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.355 [2024-07-25 23:38:21.002072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.355 [2024-07-25 23:38:21.015519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.355 [2024-07-25 23:38:21.015551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.355 [2024-07-25 23:38:21.015568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.355 [2024-07-25 23:38:21.027000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.355 [2024-07-25 23:38:21.027032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.355 [2024-07-25 23:38:21.027049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.355 [2024-07-25 23:38:21.040220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.355 [2024-07-25 23:38:21.040251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:4828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.355 [2024-07-25 23:38:21.040267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.355 [2024-07-25 23:38:21.054110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.355 [2024-07-25 23:38:21.054142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.355 [2024-07-25 23:38:21.054159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.355 [2024-07-25 23:38:21.066457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.355 [2024-07-25 23:38:21.066489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.355 [2024-07-25 23:38:21.066505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.615 [2024-07-25 23:38:21.081590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.615 [2024-07-25 23:38:21.081624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.615 [2024-07-25 23:38:21.081641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.615 [2024-07-25 23:38:21.096025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.615 [2024-07-25 23:38:21.096071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.615 [2024-07-25 23:38:21.096099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.615 [2024-07-25 23:38:21.107530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.615 [2024-07-25 23:38:21.107560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.615 [2024-07-25 23:38:21.107576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.615 [2024-07-25 23:38:21.122723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.615 [2024-07-25 23:38:21.122754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.615 [2024-07-25 23:38:21.122771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.615 [2024-07-25 23:38:21.135545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.615 [2024-07-25 23:38:21.135574] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.615 [2024-07-25 23:38:21.135590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.615 [2024-07-25 23:38:21.148233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.615 [2024-07-25 23:38:21.148266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.615 [2024-07-25 23:38:21.148283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.615 [2024-07-25 23:38:21.160702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.615 [2024-07-25 23:38:21.160734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.615 [2024-07-25 23:38:21.160751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.615 [2024-07-25 23:38:21.173611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.615 [2024-07-25 23:38:21.173644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.615 [2024-07-25 23:38:21.173661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.615 [2024-07-25 23:38:21.185468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.615 [2024-07-25 23:38:21.185497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.615 [2024-07-25 23:38:21.185513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.616 [2024-07-25 23:38:21.198444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.616 [2024-07-25 23:38:21.198474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.616 [2024-07-25 23:38:21.198491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.616 [2024-07-25 23:38:21.210569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 00:32:23.616 [2024-07-25 23:38:21.210609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.616 [2024-07-25 23:38:21.210627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.616 [2024-07-25 23:38:21.221841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280) 
00:32:23.616 [2024-07-25 23:38:21.221871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:23.616 [2024-07-25 23:38:21.221887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... ~70 similar record triplets omitted, timestamps 23:38:21.236 through 23:38:22.169: each reports nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2255280), followed by the affected READ command print (qid:1, len:1) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:32:24.658
00:32:24.658 Latency(us)
00:32:24.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:24.658 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:32:24.658 nvme0n1 : 2.01 18757.41 73.27 0.00 0.00 6813.26 3325.35 24369.68
00:32:24.658 ===================================================================================================================
00:32:24.658 Total : 18757.41 73.27 0.00 0.00 6813.26 3325.35 24369.68
00:32:24.658 0
00:32:24.658 23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:24.658 | .driver_specific
00:32:24.658 | .nvme_error
00:32:24.658 | .status_code
00:32:24.658 | .command_transient_transport_error'
00:32:24.658 23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:24.918 23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 147 > 0 ))
00:32:24.918 23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1528820
00:32:24.918 23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1528820 ']'
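[editor's note] The (( 147 > 0 )) check above is the pass criterion for this leg of the test: after two seconds of 4 KiB random reads with crc32c corruption injected, the bdev layer's NVMe error counters (enabled earlier with bdev_nvme_set_options --nvme-error-stat) must show a non-zero count of COMMAND TRANSIENT TRANSPORT ERROR completions. A minimal standalone sketch of the same extraction, assuming bdevperf is still serving RPCs on /var/tmp/bperf.sock; the errcount variable name is illustrative, and the jq path collapses the multi-line filter from the trace into one expression:

    #!/usr/bin/env bash
    # Sketch: read the per-bdev NVMe error counters and require that at
    # least one transient transport error (status 00/22) was recorded.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 )) || { echo "no transient transport errors counted" >&2; exit 1; }

The point of counting 00/22 completions rather than failed I/Os is that with --bdev-retry-count -1 the bdev layer retries digest failures indefinitely, so the job itself reports zero failures (Fail/s 0.00 above) even though every injected corruption was detected.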
00:32:24.918 23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1528820
00:32:24.918 23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:32:24.918 23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:24.918 23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1528820
00:32:24.918 23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:24.918 23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:24.918 23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1528820'
killing process with pid 1528820
00:32:24.918 23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1528820
Received shutdown signal, test time was about 2.000000 seconds
00:32:24.918
00:32:24.918 Latency(us)
00:32:24.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:24.918 ===================================================================================================================
00:32:24.918 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:24.918 23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1528820
00:32:25.176 23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1529220
23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1529220 /var/tmp/bperf.sock
23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1529220 ']'
23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
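[editor's note] run_bperf_err now repeats the experiment at the next geometry, 128 KiB random reads at queue depth 16. The harness follows the same pattern for every case: start a fresh bdevperf in idle mode and wait for its RPC socket before configuring it. A condensed sketch of that launch-and-wait step, using the paths from this workspace; the polling loop is an assumed stand-in for the autotest waitforlisten helper, and rpc_get_methods is used only as a cheap RPC to probe the socket:

    #!/usr/bin/env bash
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sock=/var/tmp/bperf.sock
    # -z keeps bdevperf idle until perform_tests is requested over $sock
    "$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # poll until the RPC socket answers (waitforlisten allows up to 100 retries)
    for ((i = 0; i < 100; i++)); do
        "$spdk/scripts/rpc.py" -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.5
    done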
00:32:25.176 23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:25.176 23:38:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:25.176 [2024-07-25 23:38:22.742774] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:32:25.176 [2024-07-25 23:38:22.742862] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1529220 ]
00:32:25.176 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:25.176 Zero copy mechanism will not be used.
00:32:25.176 EAL: No free 2048 kB hugepages reported on node 1
00:32:25.176 [2024-07-25 23:38:22.774234] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:32:25.176 [2024-07-25 23:38:22.806196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:25.176 [2024-07-25 23:38:22.896179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:25.434 23:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:25.434 23:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:32:25.434 23:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:25.434 23:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:25.692 23:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:25.692 23:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:25.692 23:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:25.692 23:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:25.692 23:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:25.692 23:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:25.949 nvme0n1
00:32:25.949 23:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:32:25.949 23:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:25.949 23:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:25.949 23:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:25.949 23:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:25.949 23:38:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:26.206 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:26.206 Zero copy mechanism will not be used.
00:32:26.206 Running I/O for 2 seconds...
00:32:26.206 [2024-07-25 23:38:23.700522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390)
00:32:26.206 [2024-07-25 23:38:23.700578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:26.206 [2024-07-25 23:38:23.700604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
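[editor's note] The len:32 READs here are this pass's 128 KiB I/Os (32 logical blocks each), and the digest errors are again self-inflicted: with --ddgst the TCP initiator verifies a crc32c data digest on received data, and the accel error injector corrupts the crc32c results so that check fails. A sketch of the injection sequence restated as a plain RPC script, with every command and argument (including -i 32) taken verbatim from the trace; the rpc variable is illustrative, and the remark about digest verification is background rather than something the log itself states:

    #!/usr/bin/env bash
    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    # clear any stale injection, then attach with data digest enabled
    $rpc accel_error_inject_error -o crc32c -t disable
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # start corrupting crc32c results, then kick off the queued bdevperf job;
    # READ completions now fail the digest check and are counted as
    # transient transport errors (00/22), as in the records that follow
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests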
[... ~45 further triplets of this form omitted, timestamps 23:38:23.708 through 23:38:24.036: each reports nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390), followed by the affected READ command print (qid:1, len:32) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:32:26.466 [2024-07-25 23:38:24.044283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390)
00:32:26.466 [2024-07-25 23:38:24.044313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.466 [2024-07-25 23:38:24.044340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.466 [2024-07-25 23:38:24.051561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.466 [2024-07-25 23:38:24.051595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.466 [2024-07-25 23:38:24.051614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.466 [2024-07-25 23:38:24.058872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.466 [2024-07-25 23:38:24.058906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.466 [2024-07-25 23:38:24.058924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.466 [2024-07-25 23:38:24.066192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.466 [2024-07-25 23:38:24.066222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.466 [2024-07-25 23:38:24.066241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.466 [2024-07-25 23:38:24.073436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.466 [2024-07-25 23:38:24.073470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.466 [2024-07-25 23:38:24.073489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.466 [2024-07-25 23:38:24.080564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.466 [2024-07-25 23:38:24.080598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.466 [2024-07-25 23:38:24.080617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.466 [2024-07-25 23:38:24.087951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.466 [2024-07-25 23:38:24.087986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.467 [2024-07-25 23:38:24.088005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.467 [2024-07-25 23:38:24.095177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.467 [2024-07-25 23:38:24.095208] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.467 [2024-07-25 23:38:24.095240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.467 [2024-07-25 23:38:24.102564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.467 [2024-07-25 23:38:24.102599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.467 [2024-07-25 23:38:24.102618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.467 [2024-07-25 23:38:24.110041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.467 [2024-07-25 23:38:24.110083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.467 [2024-07-25 23:38:24.110124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.467 [2024-07-25 23:38:24.117297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.467 [2024-07-25 23:38:24.117328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.467 [2024-07-25 23:38:24.117347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.467 [2024-07-25 23:38:24.124669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.467 [2024-07-25 23:38:24.124702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.467 [2024-07-25 23:38:24.124721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.467 [2024-07-25 23:38:24.132228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.467 [2024-07-25 23:38:24.132260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.467 [2024-07-25 23:38:24.132278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.467 [2024-07-25 23:38:24.139424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.467 [2024-07-25 23:38:24.139458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.467 [2024-07-25 23:38:24.139477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.467 [2024-07-25 23:38:24.146549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 
00:32:26.467 [2024-07-25 23:38:24.146583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.467 [2024-07-25 23:38:24.146603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.467 [2024-07-25 23:38:24.153603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.467 [2024-07-25 23:38:24.153637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.467 [2024-07-25 23:38:24.153665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.467 [2024-07-25 23:38:24.161016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.467 [2024-07-25 23:38:24.161051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.467 [2024-07-25 23:38:24.161091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.467 [2024-07-25 23:38:24.168625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.467 [2024-07-25 23:38:24.168659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.467 [2024-07-25 23:38:24.168678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.467 [2024-07-25 23:38:24.175761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.467 [2024-07-25 23:38:24.175794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.467 [2024-07-25 23:38:24.175814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.467 [2024-07-25 23:38:24.183046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.467 [2024-07-25 23:38:24.183088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.467 [2024-07-25 23:38:24.183122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.728 [2024-07-25 23:38:24.190440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.728 [2024-07-25 23:38:24.190475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.728 [2024-07-25 23:38:24.190497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.728 [2024-07-25 23:38:24.197835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.728 [2024-07-25 23:38:24.197869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.728 [2024-07-25 23:38:24.197894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.728 [2024-07-25 23:38:24.205127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.728 [2024-07-25 23:38:24.205172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.728 [2024-07-25 23:38:24.205190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.728 [2024-07-25 23:38:24.213162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.728 [2024-07-25 23:38:24.213193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.728 [2024-07-25 23:38:24.213224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.728 [2024-07-25 23:38:24.220416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.728 [2024-07-25 23:38:24.220453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.728 [2024-07-25 23:38:24.220472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.728 [2024-07-25 23:38:24.227705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.728 [2024-07-25 23:38:24.227746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.728 [2024-07-25 23:38:24.227767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.728 [2024-07-25 23:38:24.234866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.728 [2024-07-25 23:38:24.234900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.729 [2024-07-25 23:38:24.234920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.729 [2024-07-25 23:38:24.242098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.729 [2024-07-25 23:38:24.242152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.729 [2024-07-25 23:38:24.242178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.729 [2024-07-25 23:38:24.249316] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.729 [2024-07-25 23:38:24.249365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.729 [2024-07-25 23:38:24.249384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.729 [2024-07-25 23:38:24.256619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.729 [2024-07-25 23:38:24.256653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.729 [2024-07-25 23:38:24.256671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.729 [2024-07-25 23:38:24.263812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.729 [2024-07-25 23:38:24.263845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.729 [2024-07-25 23:38:24.263864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.729 [2024-07-25 23:38:24.271198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.729 [2024-07-25 23:38:24.271230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.729 [2024-07-25 23:38:24.271247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.729 [2024-07-25 23:38:24.278457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.729 [2024-07-25 23:38:24.278491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.729 [2024-07-25 23:38:24.278510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.729 [2024-07-25 23:38:24.286141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.729 [2024-07-25 23:38:24.286172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.729 [2024-07-25 23:38:24.286189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.729 [2024-07-25 23:38:24.293810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.729 [2024-07-25 23:38:24.293844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.729 [2024-07-25 23:38:24.293864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:32:26.729 [2024-07-25 23:38:24.301173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.729 [2024-07-25 23:38:24.301204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.729 [2024-07-25 23:38:24.301221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.729 [2024-07-25 23:38:24.308333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.729 [2024-07-25 23:38:24.308365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.729 [2024-07-25 23:38:24.308383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.729 [2024-07-25 23:38:24.315444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.729 [2024-07-25 23:38:24.315478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.729 [2024-07-25 23:38:24.315497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.729 [2024-07-25 23:38:24.322838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.729 [2024-07-25 23:38:24.322873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.729 [2024-07-25 23:38:24.322892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.729 [2024-07-25 23:38:24.330180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.729 [2024-07-25 23:38:24.330210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.729 [2024-07-25 23:38:24.330228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.729 [2024-07-25 23:38:24.337698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.729 [2024-07-25 23:38:24.337733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.729 [2024-07-25 23:38:24.337752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.729 [2024-07-25 23:38:24.344922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.729 [2024-07-25 23:38:24.344957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.729 [2024-07-25 23:38:24.344975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.729 [2024-07-25 23:38:24.352290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.729 [2024-07-25 23:38:24.352320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.729 [2024-07-25 23:38:24.352357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.729 [2024-07-25 23:38:24.359537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.729 [2024-07-25 23:38:24.359571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.729 [2024-07-25 23:38:24.359590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.729 [2024-07-25 23:38:24.367175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.729 [2024-07-25 23:38:24.367205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.729 [2024-07-25 23:38:24.367222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.729 [2024-07-25 23:38:24.374497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.729 [2024-07-25 23:38:24.374532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.729 [2024-07-25 23:38:24.374551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.730 [2024-07-25 23:38:24.381671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.730 [2024-07-25 23:38:24.381705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.730 [2024-07-25 23:38:24.381724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.730 [2024-07-25 23:38:24.388846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.730 [2024-07-25 23:38:24.388879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.730 [2024-07-25 23:38:24.388898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.730 [2024-07-25 23:38:24.396011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.730 [2024-07-25 23:38:24.396046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.730 [2024-07-25 23:38:24.396074] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.730 [2024-07-25 23:38:24.403335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.730 [2024-07-25 23:38:24.403367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.730 [2024-07-25 23:38:24.403401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.730 [2024-07-25 23:38:24.410521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.730 [2024-07-25 23:38:24.410555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.730 [2024-07-25 23:38:24.410574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.730 [2024-07-25 23:38:24.418153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.730 [2024-07-25 23:38:24.418185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.730 [2024-07-25 23:38:24.418202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.730 [2024-07-25 23:38:24.425519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.730 [2024-07-25 23:38:24.425553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.730 [2024-07-25 23:38:24.425572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.730 [2024-07-25 23:38:24.432684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.730 [2024-07-25 23:38:24.432719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.730 [2024-07-25 23:38:24.432737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.730 [2024-07-25 23:38:24.439956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.730 [2024-07-25 23:38:24.439990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.730 [2024-07-25 23:38:24.440009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.730 [2024-07-25 23:38:24.447281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.730 [2024-07-25 23:38:24.447314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.730 [2024-07-25 23:38:24.447332] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.992 [2024-07-25 23:38:24.454663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.992 [2024-07-25 23:38:24.454697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.992 [2024-07-25 23:38:24.454717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.992 [2024-07-25 23:38:24.461925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.992 [2024-07-25 23:38:24.461959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.992 [2024-07-25 23:38:24.461978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.992 [2024-07-25 23:38:24.469102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.992 [2024-07-25 23:38:24.469133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.992 [2024-07-25 23:38:24.469150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.992 [2024-07-25 23:38:24.476337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.992 [2024-07-25 23:38:24.476370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.992 [2024-07-25 23:38:24.476411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.992 [2024-07-25 23:38:24.483419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.992 [2024-07-25 23:38:24.483455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.992 [2024-07-25 23:38:24.483473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.992 [2024-07-25 23:38:24.490586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.992 [2024-07-25 23:38:24.490621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.992 [2024-07-25 23:38:24.490640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.992 [2024-07-25 23:38:24.497797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.992 [2024-07-25 23:38:24.497831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:26.992 [2024-07-25 23:38:24.497850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.992 [2024-07-25 23:38:24.505006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.992 [2024-07-25 23:38:24.505040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.992 [2024-07-25 23:38:24.505066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.992 [2024-07-25 23:38:24.512375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.992 [2024-07-25 23:38:24.512410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.992 [2024-07-25 23:38:24.512428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.992 [2024-07-25 23:38:24.519889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.992 [2024-07-25 23:38:24.519923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.992 [2024-07-25 23:38:24.519943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.992 [2024-07-25 23:38:24.527257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.992 [2024-07-25 23:38:24.527287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.992 [2024-07-25 23:38:24.527305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.992 [2024-07-25 23:38:24.534464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.992 [2024-07-25 23:38:24.534499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.992 [2024-07-25 23:38:24.534518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.992 [2024-07-25 23:38:24.541686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.992 [2024-07-25 23:38:24.541725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.992 [2024-07-25 23:38:24.541745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.992 [2024-07-25 23:38:24.548897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.992 [2024-07-25 23:38:24.548930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.992 [2024-07-25 23:38:24.548949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.992 [2024-07-25 23:38:24.555947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.992 [2024-07-25 23:38:24.555981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.992 [2024-07-25 23:38:24.555999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.992 [2024-07-25 23:38:24.563178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.992 [2024-07-25 23:38:24.563209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.992 [2024-07-25 23:38:24.563226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.992 [2024-07-25 23:38:24.570547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.992 [2024-07-25 23:38:24.570582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.992 [2024-07-25 23:38:24.570600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.993 [2024-07-25 23:38:24.577806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.993 [2024-07-25 23:38:24.577840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.993 [2024-07-25 23:38:24.577859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.993 [2024-07-25 23:38:24.584966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.993 [2024-07-25 23:38:24.585001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.993 [2024-07-25 23:38:24.585019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.993 [2024-07-25 23:38:24.592125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.993 [2024-07-25 23:38:24.592171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.993 [2024-07-25 23:38:24.592188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.993 [2024-07-25 23:38:24.599302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.993 [2024-07-25 23:38:24.599333] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.993 [2024-07-25 23:38:24.599350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.993 [2024-07-25 23:38:24.606604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.993 [2024-07-25 23:38:24.606637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.993 [2024-07-25 23:38:24.606656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.993 [2024-07-25 23:38:24.613816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.993 [2024-07-25 23:38:24.613850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.993 [2024-07-25 23:38:24.613870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.993 [2024-07-25 23:38:24.621168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.993 [2024-07-25 23:38:24.621199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.993 [2024-07-25 23:38:24.621216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.993 [2024-07-25 23:38:24.628293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.993 [2024-07-25 23:38:24.628324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.993 [2024-07-25 23:38:24.628341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.993 [2024-07-25 23:38:24.635533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.993 [2024-07-25 23:38:24.635567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.993 [2024-07-25 23:38:24.635586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.993 [2024-07-25 23:38:24.642815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.993 [2024-07-25 23:38:24.642849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.993 [2024-07-25 23:38:24.642868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.993 [2024-07-25 23:38:24.650084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.993 [2024-07-25 23:38:24.650117] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.993 [2024-07-25 23:38:24.650136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.993 [2024-07-25 23:38:24.657999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.993 [2024-07-25 23:38:24.658034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.993 [2024-07-25 23:38:24.658053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.993 [2024-07-25 23:38:24.666033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.993 [2024-07-25 23:38:24.666076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.993 [2024-07-25 23:38:24.666118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.993 [2024-07-25 23:38:24.673991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.993 [2024-07-25 23:38:24.674026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.993 [2024-07-25 23:38:24.674046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.993 [2024-07-25 23:38:24.681331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.993 [2024-07-25 23:38:24.681363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.993 [2024-07-25 23:38:24.681396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.993 [2024-07-25 23:38:24.688456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.993 [2024-07-25 23:38:24.688490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.993 [2024-07-25 23:38:24.688509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.993 [2024-07-25 23:38:24.695533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.993 [2024-07-25 23:38:24.695567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.993 [2024-07-25 23:38:24.695586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.993 [2024-07-25 23:38:24.702808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 
00:32:26.993 [2024-07-25 23:38:24.702846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.993 [2024-07-25 23:38:24.702866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.993 [2024-07-25 23:38:24.709952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:26.993 [2024-07-25 23:38:24.709985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.993 [2024-07-25 23:38:24.710004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.253 [2024-07-25 23:38:24.717318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.253 [2024-07-25 23:38:24.717363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.253 [2024-07-25 23:38:24.717380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.253 [2024-07-25 23:38:24.724533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.253 [2024-07-25 23:38:24.724566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.253 [2024-07-25 23:38:24.724585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.253 [2024-07-25 23:38:24.731883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.253 [2024-07-25 23:38:24.731925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.253 [2024-07-25 23:38:24.731946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.253 [2024-07-25 23:38:24.739172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.253 [2024-07-25 23:38:24.739218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.253 [2024-07-25 23:38:24.739235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.253 [2024-07-25 23:38:24.746411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.254 [2024-07-25 23:38:24.746445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.254 [2024-07-25 23:38:24.746465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.254 [2024-07-25 23:38:24.753690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.254 [2024-07-25 23:38:24.753724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.254 [2024-07-25 23:38:24.753744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.254 [2024-07-25 23:38:24.760893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.254 [2024-07-25 23:38:24.760927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.254 [2024-07-25 23:38:24.760946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.254 [2024-07-25 23:38:24.768147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.254 [2024-07-25 23:38:24.768178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.254 [2024-07-25 23:38:24.768195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.254 [2024-07-25 23:38:24.775657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.254 [2024-07-25 23:38:24.775691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.254 [2024-07-25 23:38:24.775711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.254 [2024-07-25 23:38:24.782980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.254 [2024-07-25 23:38:24.783015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.254 [2024-07-25 23:38:24.783034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.254 [2024-07-25 23:38:24.790155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.254 [2024-07-25 23:38:24.790185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.254 [2024-07-25 23:38:24.790203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.254 [2024-07-25 23:38:24.797357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.254 [2024-07-25 23:38:24.797391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.254 [2024-07-25 23:38:24.797410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.254 [2024-07-25 23:38:24.804585] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.254 [2024-07-25 23:38:24.804619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.254 [2024-07-25 23:38:24.804637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.254 [2024-07-25 23:38:24.812034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.254 [2024-07-25 23:38:24.812076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.254 [2024-07-25 23:38:24.812097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.254 [2024-07-25 23:38:24.819413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.254 [2024-07-25 23:38:24.819447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.254 [2024-07-25 23:38:24.819466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.254 [2024-07-25 23:38:24.826542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.254 [2024-07-25 23:38:24.826576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.254 [2024-07-25 23:38:24.826595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.254 [2024-07-25 23:38:24.833777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.254 [2024-07-25 23:38:24.833810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.254 [2024-07-25 23:38:24.833829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.254 [2024-07-25 23:38:24.840983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.254 [2024-07-25 23:38:24.841017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.254 [2024-07-25 23:38:24.841036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.254 [2024-07-25 23:38:24.848235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.254 [2024-07-25 23:38:24.848265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.254 [2024-07-25 23:38:24.848282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:32:27.254 [2024-07-25 23:38:24.855505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.254 [2024-07-25 23:38:24.855539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.254 [2024-07-25 23:38:24.855563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.254 [2024-07-25 23:38:24.862796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.254 [2024-07-25 23:38:24.862829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.254 [2024-07-25 23:38:24.862847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.254 [2024-07-25 23:38:24.870194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.254 [2024-07-25 23:38:24.870226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.254 [2024-07-25 23:38:24.870244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.254 [2024-07-25 23:38:24.877345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.254 [2024-07-25 23:38:24.877375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.254 [2024-07-25 23:38:24.877392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.254 [2024-07-25 23:38:24.884710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.254 [2024-07-25 23:38:24.884744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.254 [2024-07-25 23:38:24.884763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.254 [2024-07-25 23:38:24.891845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.254 [2024-07-25 23:38:24.891880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.254 [2024-07-25 23:38:24.891899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.255 [2024-07-25 23:38:24.899155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.255 [2024-07-25 23:38:24.899185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.255 [2024-07-25 23:38:24.899202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.255 [2024-07-25 23:38:24.906367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.255 [2024-07-25 23:38:24.906401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.255 [2024-07-25 23:38:24.906419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.255 [2024-07-25 23:38:24.913094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.255 [2024-07-25 23:38:24.913142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.255 [2024-07-25 23:38:24.913159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.255 [2024-07-25 23:38:24.918150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.255 [2024-07-25 23:38:24.918196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.255 [2024-07-25 23:38:24.918213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.255 [2024-07-25 23:38:24.927728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.255 [2024-07-25 23:38:24.927761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.255 [2024-07-25 23:38:24.927781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.255 [2024-07-25 23:38:24.937131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.255 [2024-07-25 23:38:24.937162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.255 [2024-07-25 23:38:24.937179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.255 [2024-07-25 23:38:24.946382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.255 [2024-07-25 23:38:24.946430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.255 [2024-07-25 23:38:24.946450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.255 [2024-07-25 23:38:24.955932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.255 [2024-07-25 23:38:24.955967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.255 [2024-07-25 23:38:24.955986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.255 [2024-07-25 23:38:24.965275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.255 [2024-07-25 23:38:24.965319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.255 [2024-07-25 23:38:24.965336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.255 [2024-07-25 23:38:24.974925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.255 [2024-07-25 23:38:24.974962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.255 [2024-07-25 23:38:24.974982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.514 [2024-07-25 23:38:24.983442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.514 [2024-07-25 23:38:24.983481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.514 [2024-07-25 23:38:24.983501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.514 [2024-07-25 23:38:24.992835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.514 [2024-07-25 23:38:24.992873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.514 [2024-07-25 23:38:24.992899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.514 [2024-07-25 23:38:25.002299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.514 [2024-07-25 23:38:25.002332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.514 [2024-07-25 23:38:25.002365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.514 [2024-07-25 23:38:25.012231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.514 [2024-07-25 23:38:25.012264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.514 [2024-07-25 23:38:25.012297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.514 [2024-07-25 23:38:25.022466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.514 [2024-07-25 23:38:25.022503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.514 [2024-07-25 23:38:25.022522] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.514 [2024-07-25 23:38:25.032162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.514 [2024-07-25 23:38:25.032208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.514 [2024-07-25 23:38:25.032226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.514 [2024-07-25 23:38:25.039713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.514 [2024-07-25 23:38:25.039749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.514 [2024-07-25 23:38:25.039769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.514 [2024-07-25 23:38:25.048271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.514 [2024-07-25 23:38:25.048304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.514 [2024-07-25 23:38:25.048321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.514 [2024-07-25 23:38:25.056280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.514 [2024-07-25 23:38:25.056329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.514 [2024-07-25 23:38:25.056346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.514 [2024-07-25 23:38:25.064264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.514 [2024-07-25 23:38:25.064294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.514 [2024-07-25 23:38:25.064310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.514 [2024-07-25 23:38:25.072301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.514 [2024-07-25 23:38:25.072353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.514 [2024-07-25 23:38:25.072375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.514 [2024-07-25 23:38:25.080478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.514 [2024-07-25 23:38:25.080523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.514 
[2024-07-25 23:38:25.080542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.514 [2024-07-25 23:38:25.088117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.514 [2024-07-25 23:38:25.088149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.514 [2024-07-25 23:38:25.088166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.514 [2024-07-25 23:38:25.095411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.515 [2024-07-25 23:38:25.095442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.515 [2024-07-25 23:38:25.095475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.515 [2024-07-25 23:38:25.102612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.515 [2024-07-25 23:38:25.102647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.515 [2024-07-25 23:38:25.102666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.515 [2024-07-25 23:38:25.110275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.515 [2024-07-25 23:38:25.110320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.515 [2024-07-25 23:38:25.110336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.515 [2024-07-25 23:38:25.118010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.515 [2024-07-25 23:38:25.118044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.515 [2024-07-25 23:38:25.118070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.515 [2024-07-25 23:38:25.125296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.515 [2024-07-25 23:38:25.125341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.515 [2024-07-25 23:38:25.125358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.515 [2024-07-25 23:38:25.133245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.515 [2024-07-25 23:38:25.133276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:768 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.515 [2024-07-25 23:38:25.133293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.515 [2024-07-25 23:38:25.140915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.515 [2024-07-25 23:38:25.140950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.515 [2024-07-25 23:38:25.140970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.515 [2024-07-25 23:38:25.148788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.515 [2024-07-25 23:38:25.148832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.515 [2024-07-25 23:38:25.148851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.515 [2024-07-25 23:38:25.156908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.515 [2024-07-25 23:38:25.156949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.515 [2024-07-25 23:38:25.156970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.515 [2024-07-25 23:38:25.164355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.515 [2024-07-25 23:38:25.164387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.515 [2024-07-25 23:38:25.164405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.515 [2024-07-25 23:38:25.171683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.515 [2024-07-25 23:38:25.171718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.515 [2024-07-25 23:38:25.171738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.515 [2024-07-25 23:38:25.179034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.515 [2024-07-25 23:38:25.179080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.515 [2024-07-25 23:38:25.179115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.515 [2024-07-25 23:38:25.187286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.515 [2024-07-25 23:38:25.187318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.515 [2024-07-25 23:38:25.187335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.515 [2024-07-25 23:38:25.195332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.515 [2024-07-25 23:38:25.195363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.515 [2024-07-25 23:38:25.195380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.515 [2024-07-25 23:38:25.203260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.515 [2024-07-25 23:38:25.203290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.515 [2024-07-25 23:38:25.203326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.515 [2024-07-25 23:38:25.211340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.515 [2024-07-25 23:38:25.211372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.515 [2024-07-25 23:38:25.211405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.515 [2024-07-25 23:38:25.219045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.515 [2024-07-25 23:38:25.219088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.515 [2024-07-25 23:38:25.219133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.515 [2024-07-25 23:38:25.226452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.515 [2024-07-25 23:38:25.226486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.515 [2024-07-25 23:38:25.226505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.515 [2024-07-25 23:38:25.234389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.515 [2024-07-25 23:38:25.234426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.515 [2024-07-25 23:38:25.234445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.774 [2024-07-25 23:38:25.243937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.774 [2024-07-25 23:38:25.243973] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.774 [2024-07-25 23:38:25.243993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.774 [2024-07-25 23:38:25.252164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.774 [2024-07-25 23:38:25.252196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.774 [2024-07-25 23:38:25.252215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.774 [2024-07-25 23:38:25.260040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.774 [2024-07-25 23:38:25.260083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.774 [2024-07-25 23:38:25.260104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.774 [2024-07-25 23:38:25.267227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.774 [2024-07-25 23:38:25.267258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.774 [2024-07-25 23:38:25.267275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.774 [2024-07-25 23:38:25.274474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.774 [2024-07-25 23:38:25.274514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.774 [2024-07-25 23:38:25.274534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.775 [2024-07-25 23:38:25.281718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.775 [2024-07-25 23:38:25.281752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.775 [2024-07-25 23:38:25.281771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.775 [2024-07-25 23:38:25.288892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.775 [2024-07-25 23:38:25.288926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.775 [2024-07-25 23:38:25.288945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.775 [2024-07-25 23:38:25.296174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 
00:32:27.775 [2024-07-25 23:38:25.296218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.775 [2024-07-25 23:38:25.296234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.775 [2024-07-25 23:38:25.303485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.775 [2024-07-25 23:38:25.303519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.775 [2024-07-25 23:38:25.303538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.775 [2024-07-25 23:38:25.310894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.775 [2024-07-25 23:38:25.310928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.775 [2024-07-25 23:38:25.310946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.775 [2024-07-25 23:38:25.318052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.775 [2024-07-25 23:38:25.318108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.775 [2024-07-25 23:38:25.318127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.775 [2024-07-25 23:38:25.325298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.775 [2024-07-25 23:38:25.325328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.775 [2024-07-25 23:38:25.325345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.775 [2024-07-25 23:38:25.332417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.775 [2024-07-25 23:38:25.332451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.775 [2024-07-25 23:38:25.332470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.775 [2024-07-25 23:38:25.339655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.775 [2024-07-25 23:38:25.339688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.775 [2024-07-25 23:38:25.339706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.775 [2024-07-25 23:38:25.346830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.775 [2024-07-25 23:38:25.346863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.775 [2024-07-25 23:38:25.346882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.775 [2024-07-25 23:38:25.353990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.775 [2024-07-25 23:38:25.354024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.775 [2024-07-25 23:38:25.354042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.775 [2024-07-25 23:38:25.361263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.775 [2024-07-25 23:38:25.361293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.775 [2024-07-25 23:38:25.361310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.775 [2024-07-25 23:38:25.368475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.775 [2024-07-25 23:38:25.368509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.775 [2024-07-25 23:38:25.368528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.775 [2024-07-25 23:38:25.375919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.775 [2024-07-25 23:38:25.375953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.775 [2024-07-25 23:38:25.375982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.775 [2024-07-25 23:38:25.383451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.775 [2024-07-25 23:38:25.383489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.775 [2024-07-25 23:38:25.383508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.775 [2024-07-25 23:38:25.390847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.775 [2024-07-25 23:38:25.390880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.775 [2024-07-25 23:38:25.390898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.775 [2024-07-25 23:38:25.397951] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.775 [2024-07-25 23:38:25.397984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.775 [2024-07-25 23:38:25.398015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.775 [2024-07-25 23:38:25.405094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.775 [2024-07-25 23:38:25.405144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.775 [2024-07-25 23:38:25.405164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.775 [2024-07-25 23:38:25.412255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.775 [2024-07-25 23:38:25.412289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.775 [2024-07-25 23:38:25.412306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.775 [2024-07-25 23:38:25.419572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.775 [2024-07-25 23:38:25.419606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.775 [2024-07-25 23:38:25.419636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.775 [2024-07-25 23:38:25.426801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.776 [2024-07-25 23:38:25.426834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.776 [2024-07-25 23:38:25.426854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.776 [2024-07-25 23:38:25.433934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.776 [2024-07-25 23:38:25.433967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.776 [2024-07-25 23:38:25.433986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.776 [2024-07-25 23:38:25.441272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.776 [2024-07-25 23:38:25.441317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.776 [2024-07-25 23:38:25.441336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:32:27.776 [2024-07-25 23:38:25.448654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.776 [2024-07-25 23:38:25.448687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.776 [2024-07-25 23:38:25.448717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.776 [2024-07-25 23:38:25.455914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.776 [2024-07-25 23:38:25.455947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.776 [2024-07-25 23:38:25.455967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.776 [2024-07-25 23:38:25.463335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.776 [2024-07-25 23:38:25.463384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.776 [2024-07-25 23:38:25.463402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.776 [2024-07-25 23:38:25.470444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.776 [2024-07-25 23:38:25.470478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.776 [2024-07-25 23:38:25.470497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.776 [2024-07-25 23:38:25.477787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.776 [2024-07-25 23:38:25.477821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.776 [2024-07-25 23:38:25.477839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.776 [2024-07-25 23:38:25.485048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.776 [2024-07-25 23:38:25.485105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.776 [2024-07-25 23:38:25.485124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.776 [2024-07-25 23:38:25.492481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:27.776 [2024-07-25 23:38:25.492516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.776 [2024-07-25 23:38:25.492535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.034 [2024-07-25 23:38:25.499739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.034 [2024-07-25 23:38:25.499773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.034 [2024-07-25 23:38:25.499796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.034 [2024-07-25 23:38:25.506991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.034 [2024-07-25 23:38:25.507025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.034 [2024-07-25 23:38:25.507048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.034 [2024-07-25 23:38:25.514286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.034 [2024-07-25 23:38:25.514317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.034 [2024-07-25 23:38:25.514334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.034 [2024-07-25 23:38:25.521625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.034 [2024-07-25 23:38:25.521658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.034 [2024-07-25 23:38:25.521686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.034 [2024-07-25 23:38:25.528892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.034 [2024-07-25 23:38:25.528925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.034 [2024-07-25 23:38:25.528945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.034 [2024-07-25 23:38:25.536041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.034 [2024-07-25 23:38:25.536081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.034 [2024-07-25 23:38:25.536120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.034 [2024-07-25 23:38:25.543376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.034 [2024-07-25 23:38:25.543409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.034 [2024-07-25 23:38:25.543428] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.034 [2024-07-25 23:38:25.550530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.034 [2024-07-25 23:38:25.550563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.034 [2024-07-25 23:38:25.550582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.034 [2024-07-25 23:38:25.557707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.035 [2024-07-25 23:38:25.557739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.035 [2024-07-25 23:38:25.557764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.035 [2024-07-25 23:38:25.564939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.035 [2024-07-25 23:38:25.564972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.035 [2024-07-25 23:38:25.564990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.035 [2024-07-25 23:38:25.572829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.035 [2024-07-25 23:38:25.572864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.035 [2024-07-25 23:38:25.572888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.035 [2024-07-25 23:38:25.579945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.035 [2024-07-25 23:38:25.579979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.035 [2024-07-25 23:38:25.580003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.035 [2024-07-25 23:38:25.587187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.035 [2024-07-25 23:38:25.587222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.035 [2024-07-25 23:38:25.587240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.035 [2024-07-25 23:38:25.594443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.035 [2024-07-25 23:38:25.594477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.035 [2024-07-25 
23:38:25.594495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.035 [2024-07-25 23:38:25.601661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.035 [2024-07-25 23:38:25.601695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.035 [2024-07-25 23:38:25.601714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.035 [2024-07-25 23:38:25.608779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.035 [2024-07-25 23:38:25.608811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.035 [2024-07-25 23:38:25.608830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.035 [2024-07-25 23:38:25.615996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.035 [2024-07-25 23:38:25.616029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.035 [2024-07-25 23:38:25.616049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.035 [2024-07-25 23:38:25.623578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.035 [2024-07-25 23:38:25.623613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.035 [2024-07-25 23:38:25.623631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.035 [2024-07-25 23:38:25.630786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.035 [2024-07-25 23:38:25.630820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.035 [2024-07-25 23:38:25.630839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.035 [2024-07-25 23:38:25.638057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.035 [2024-07-25 23:38:25.638111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.035 [2024-07-25 23:38:25.638130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.035 [2024-07-25 23:38:25.645270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.035 [2024-07-25 23:38:25.645300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:28.035 [2024-07-25 23:38:25.645320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.035 [2024-07-25 23:38:25.652504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.035 [2024-07-25 23:38:25.652538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.035 [2024-07-25 23:38:25.652556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.035 [2024-07-25 23:38:25.659736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.035 [2024-07-25 23:38:25.659769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.035 [2024-07-25 23:38:25.659787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.035 [2024-07-25 23:38:25.666904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.035 [2024-07-25 23:38:25.666937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.035 [2024-07-25 23:38:25.666955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.035 [2024-07-25 23:38:25.674042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.035 [2024-07-25 23:38:25.674083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.035 [2024-07-25 23:38:25.674118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.035 [2024-07-25 23:38:25.682545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.035 [2024-07-25 23:38:25.682580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.035 [2024-07-25 23:38:25.682599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.035 [2024-07-25 23:38:25.691948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x182b390) 00:32:28.035 [2024-07-25 23:38:25.691983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.035 [2024-07-25 23:38:25.692002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.035 00:32:28.035 Latency(us) 00:32:28.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:28.035 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:28.035 nvme0n1 : 2.00 4164.72 520.59 0.00 
0.00 3837.48 892.02 10194.49
00:32:28.035 ===================================================================================================================
00:32:28.035 Total : 4164.72 520.59 0.00 0.00 3837.48 892.02 10194.49
00:32:28.035 0
00:32:28.036 23:38:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:28.036 23:38:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:28.036 23:38:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:28.036 23:38:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:28.036 | .driver_specific
00:32:28.036 | .nvme_error
00:32:28.036 | .status_code
00:32:28.036 | .command_transient_transport_error'
00:32:28.294 23:38:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 268 > 0 ))
00:32:28.294 23:38:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1529220
00:32:28.294 23:38:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1529220 ']'
00:32:28.294 23:38:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1529220
00:32:28.294 23:38:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:32:28.294 23:38:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:28.294 23:38:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1529220
00:32:28.294 23:38:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:28.294 23:38:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:28.295 23:38:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1529220'
00:32:28.295 killing process with pid 1529220
00:32:28.295 23:38:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1529220
00:32:28.295 Received shutdown signal, test time was about 2.000000 seconds
00:32:28.295 00
00:32:28.295 Latency(us)
00:32:28.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:28.295 ===================================================================================================================
00:32:28.295 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:28.295 23:38:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1529220
00:32:28.595 23:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:32:28.595 23:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:28.595 23:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:32:28.595 23:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:32:28.595 23:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
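The get_transient_errcount trace above is the pass/fail probe of this test case: it fetches per-bdev I/O statistics over the bperf RPC socket and extracts the transient-transport-error counter, which the injected digest corruption drove to 268 here. A minimal standalone sketch of that probe, assuming it is run from the SPDK source tree while bdevperf is still listening on /var/tmp/bperf.sock:

  get_transient_errcount() {
      # bdev_get_iostat reports NVMe error counters per status code when the
      # bdev_nvme layer was started with --nvme-error-stat (see trace below).
      ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" |
          jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
  }
  # The test passes only if at least one such error was recorded:
  (( $(get_transient_errcount nvme0n1) > 0 ))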
00:32:28.595 23:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1529632
00:32:28.595 23:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1529632 /var/tmp/bperf.sock
00:32:28.595 23:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:32:28.595 23:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1529632 ']'
00:32:28.595 23:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:28.595 23:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:28.595 23:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:28.595 23:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:28.595 23:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:28.600 [2024-07-25 23:38:26.233170] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:32:28.600 [2024-07-25 23:38:26.233251] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1529632 ]
00:32:28.852 EAL: No free 2048 kB hugepages reported on node 1
00:32:28.853 [2024-07-25 23:38:26.271668] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
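The launch pattern for each bdevperf instance is the one just traced: start it in the background with -z so it comes up idle and waits for an RPC before running any job, record the pid, and poll until the UNIX domain RPC socket accepts connections before issuing RPCs against it. A minimal sketch under this workspace's paths (waitforlisten is the autotest_common.sh helper doing the polling here):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!                                    # kept for killprocess/wait at teardown
  waitforlisten "$bperfpid" /var/tmp/bperf.sock  # blocks until the socket is listening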
00:32:28.854 [2024-07-25 23:38:26.302509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:28.854 [2024-07-25 23:38:26.394971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:28.855 23:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:28.855 23:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:32:28.855 23:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:28.855 23:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:29.113 23:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:29.113 23:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:29.113 23:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:29.113 23:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:29.113 23:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:29.113 23:38:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:29.680 nvme0n1
00:32:29.680 23:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:32:29.680 23:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:29.680 23:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:29.680 23:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:29.680 23:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:29.680 23:38:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:29.680 Running I/O for 2 seconds...
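The setup just traced is what produces the flood of write-side digest failures that follows. Two RPC sockets are in play: bperf_rpc wraps rpc.py -s /var/tmp/bperf.sock (this bdevperf instance), while rpc_cmd uses the default application socket, which in this job appears to be the NVMe-oF target, so it is the target's accel crc32c results that get corrupted. Because the controller is attached with --ddgst, each corrupted digest surfaces as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which --bdev-retry-count -1 keeps retrying while --nvme-error-stat keeps counting. Condensed under those assumptions:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $rpc accel_error_inject_error -o crc32c -t disable            # default socket: clear any stale injection
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0    # prints the new bdev name, nvme0n1
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256     # corrupt the next 256 crc32c operations
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests                      # kick off the 2-second randwrite job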
00:32:29.680 [2024-07-25 23:38:27.319941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190edd58 00:32:29.680 [2024-07-25 23:38:27.321083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.680 [2024-07-25 23:38:27.321135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:29.680 [2024-07-25 23:38:27.332270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190fa3a0 00:32:29.680 [2024-07-25 23:38:27.333362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.680 [2024-07-25 23:38:27.333406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:29.680 [2024-07-25 23:38:27.346602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190fdeb0 00:32:29.680 [2024-07-25 23:38:27.347891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.680 [2024-07-25 23:38:27.347938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:29.680 [2024-07-25 23:38:27.358613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190e4578 00:32:29.680 [2024-07-25 23:38:27.359921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.680 [2024-07-25 23:38:27.359965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:29.680 [2024-07-25 23:38:27.372206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190ec408 00:32:29.680 [2024-07-25 23:38:27.373542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.680 [2024-07-25 23:38:27.373587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:29.680 [2024-07-25 23:38:27.385566] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190e6300 00:32:29.680 [2024-07-25 23:38:27.387231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.680 [2024-07-25 23:38:27.387276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:29.680 [2024-07-25 23:38:27.398947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190fb480 00:32:29.680 [2024-07-25 23:38:27.400781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.680 [2024-07-25 23:38:27.400825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 
sqhd:005c p:0 m:0 dnr:0 00:32:29.939 [2024-07-25 23:38:27.412498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f5378 00:32:29.939 [2024-07-25 23:38:27.414512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.939 [2024-07-25 23:38:27.414555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:29.939 [2024-07-25 23:38:27.425828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f4298 00:32:29.939 [2024-07-25 23:38:27.427975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.939 [2024-07-25 23:38:27.428003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:29.939 [2024-07-25 23:38:27.434921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190eaef0 00:32:29.939 [2024-07-25 23:38:27.435849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.939 [2024-07-25 23:38:27.435903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:29.939 [2024-07-25 23:38:27.446943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190ec840 00:32:29.939 [2024-07-25 23:38:27.447864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.939 [2024-07-25 23:38:27.447907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:29.939 [2024-07-25 23:38:27.460271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190fb048 00:32:29.939 [2024-07-25 23:38:27.461353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.939 [2024-07-25 23:38:27.461396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:29.939 [2024-07-25 23:38:27.474484] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190ea248 00:32:29.939 [2024-07-25 23:38:27.475746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.939 [2024-07-25 23:38:27.475792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:29.940 [2024-07-25 23:38:27.487518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190eea00 00:32:29.940 [2024-07-25 23:38:27.488942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.940 [2024-07-25 23:38:27.488990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:29.940 [2024-07-25 23:38:27.499385] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190e4de8 00:32:29.940 [2024-07-25 23:38:27.500814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.940 [2024-07-25 23:38:27.500841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:29.940 [2024-07-25 23:38:27.512563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190e1b48 00:32:29.940 [2024-07-25 23:38:27.514189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.940 [2024-07-25 23:38:27.514216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:29.940 [2024-07-25 23:38:27.524326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190fe2e8 00:32:29.940 [2024-07-25 23:38:27.525396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.940 [2024-07-25 23:38:27.525439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:29.940 [2024-07-25 23:38:27.537101] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f0bc0 00:32:29.940 [2024-07-25 23:38:27.538033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.940 [2024-07-25 23:38:27.538074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:29.940 [2024-07-25 23:38:27.551582] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f7100 00:32:29.940 [2024-07-25 23:38:27.553514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.940 [2024-07-25 23:38:27.553541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:29.940 [2024-07-25 23:38:27.564794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190df550 00:32:29.940 [2024-07-25 23:38:27.566901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.940 [2024-07-25 23:38:27.566942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:29.940 [2024-07-25 23:38:27.573724] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190e6738 00:32:29.940 [2024-07-25 23:38:27.574647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.940 [2024-07-25 23:38:27.574674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:29.940 [2024-07-25 23:38:27.588192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190e8088 00:32:29.940 [2024-07-25 23:38:27.589747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.940 [2024-07-25 23:38:27.589793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:29.940 [2024-07-25 23:38:27.599991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190e4de8 00:32:29.940 [2024-07-25 23:38:27.601080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.940 [2024-07-25 23:38:27.601122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:29.940 [2024-07-25 23:38:27.611585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190ddc00 00:32:29.940 [2024-07-25 23:38:27.612653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.940 [2024-07-25 23:38:27.612696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:29.940 [2024-07-25 23:38:27.624818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f57b0 00:32:29.940 [2024-07-25 23:38:27.626043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.940 [2024-07-25 23:38:27.626093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:29.940 [2024-07-25 23:38:27.637988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190ef270 00:32:29.940 [2024-07-25 23:38:27.639372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.940 [2024-07-25 23:38:27.639415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:29.940 [2024-07-25 23:38:27.651148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190e7818 00:32:29.940 [2024-07-25 23:38:27.652691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:29.940 [2024-07-25 23:38:27.652719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:29.940 [2024-07-25 23:38:27.662949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190ddc00 00:32:30.199 [2024-07-25 23:38:27.664040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.199 [2024-07-25 23:38:27.664081] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:30.199 [2024-07-25 23:38:27.675471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f31b8 00:32:30.199 [2024-07-25 23:38:27.676549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.200 [2024-07-25 23:38:27.676597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:30.200 [2024-07-25 23:38:27.688451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190e4de8 00:32:30.200 [2024-07-25 23:38:27.689681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.200 [2024-07-25 23:38:27.689709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:30.200 [2024-07-25 23:38:27.700429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190fc560 00:32:30.200 [2024-07-25 23:38:27.701653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.200 [2024-07-25 23:38:27.701694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:30.200 [2024-07-25 23:38:27.713705] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190fc128 00:32:30.200 [2024-07-25 23:38:27.715113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.200 [2024-07-25 23:38:27.715154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:30.200 [2024-07-25 23:38:27.725543] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190eb328 00:32:30.200 [2024-07-25 23:38:27.726406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.200 [2024-07-25 23:38:27.726449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:30.200 [2024-07-25 23:38:27.738249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f8a50 00:32:30.200 [2024-07-25 23:38:27.738953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.200 [2024-07-25 23:38:27.738985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:30.200 [2024-07-25 23:38:27.751450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f8e88 00:32:30.200 [2024-07-25 23:38:27.752398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.200 [2024-07-25 
23:38:27.752431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:30.200 [2024-07-25 23:38:27.765898] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190fb480 00:32:30.200 [2024-07-25 23:38:27.767801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.200 [2024-07-25 23:38:27.767843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:30.200 [2024-07-25 23:38:27.777751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f6890 00:32:30.200 [2024-07-25 23:38:27.779148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.200 [2024-07-25 23:38:27.779181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:30.200 [2024-07-25 23:38:27.789283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f5378 00:32:30.200 [2024-07-25 23:38:27.791187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.200 [2024-07-25 23:38:27.791216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:30.200 [2024-07-25 23:38:27.800094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190ff3c8 00:32:30.200 [2024-07-25 23:38:27.800984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.200 [2024-07-25 23:38:27.801026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:30.200 [2024-07-25 23:38:27.814230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190feb58 00:32:30.200 [2024-07-25 23:38:27.815298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.200 [2024-07-25 23:38:27.815345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:30.200 [2024-07-25 23:38:27.827429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190ec840 00:32:30.200 [2024-07-25 23:38:27.828692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.200 [2024-07-25 23:38:27.828748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:30.200 [2024-07-25 23:38:27.841896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f2948 00:32:30.200 [2024-07-25 23:38:27.843780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:30.200 [2024-07-25 23:38:27.843808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:30.200 [2024-07-25 23:38:27.853676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190e4de8 00:32:30.200 [2024-07-25 23:38:27.855092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.200 [2024-07-25 23:38:27.855135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:30.200 [2024-07-25 23:38:27.865217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f1ca0 00:32:30.200 [2024-07-25 23:38:27.867152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.200 [2024-07-25 23:38:27.867181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:30.200 [2024-07-25 23:38:27.876015] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190fc560 00:32:30.200 [2024-07-25 23:38:27.876880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.200 [2024-07-25 23:38:27.876906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:30.200 [2024-07-25 23:38:27.890040] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190eb760 00:32:30.200 [2024-07-25 23:38:27.891143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.200 [2024-07-25 23:38:27.891170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:30.200 [2024-07-25 23:38:27.903131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f81e0 00:32:30.200 [2024-07-25 23:38:27.904315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.200 [2024-07-25 23:38:27.904357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:30.200 [2024-07-25 23:38:27.915031] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190fe2e8 00:32:30.200 [2024-07-25 23:38:27.916254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.200 [2024-07-25 23:38:27.916296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:30.459 [2024-07-25 23:38:27.928368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190fdeb0 00:32:30.459 [2024-07-25 23:38:27.929770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11716 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:30.459 [2024-07-25 23:38:27.929812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:30.459 [2024-07-25 23:38:27.940178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f57b0 00:32:30.459 [2024-07-25 23:38:27.941047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.459 [2024-07-25 23:38:27.941096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:30.459 [2024-07-25 23:38:27.952980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190ec408 00:32:30.459 [2024-07-25 23:38:27.953694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.459 [2024-07-25 23:38:27.953724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:30.459 [2024-07-25 23:38:27.966222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190eaef0 00:32:30.459 [2024-07-25 23:38:27.967102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.459 [2024-07-25 23:38:27.967148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:30.459 [2024-07-25 23:38:27.980763] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190e5ec8 00:32:30.459 [2024-07-25 23:38:27.982654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.459 [2024-07-25 23:38:27.982681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:30.459 [2024-07-25 23:38:27.993965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190e38d0 00:32:30.459 [2024-07-25 23:38:27.996056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.459 [2024-07-25 23:38:27.996104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:30.459 [2024-07-25 23:38:28.002957] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190fb480 00:32:30.459 [2024-07-25 23:38:28.003828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.459 [2024-07-25 23:38:28.003875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:30.459 [2024-07-25 23:38:28.015550] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190e23b8 00:32:30.459 [2024-07-25 23:38:28.016461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 
lba:10663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.459 [2024-07-25 23:38:28.016505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:30.459 [2024-07-25 23:38:28.028620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190eea00 00:32:30.459 [2024-07-25 23:38:28.029690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.459 [2024-07-25 23:38:28.029719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:30.459 [2024-07-25 23:38:28.040680] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f8618 00:32:30.459 [2024-07-25 23:38:28.041740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.459 [2024-07-25 23:38:28.041767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:30.459 [2024-07-25 23:38:28.054882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190e6b70 00:32:30.459 [2024-07-25 23:38:28.056154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.460 [2024-07-25 23:38:28.056201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:30.460 [2024-07-25 23:38:28.066727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190eaef0 00:32:30.460 [2024-07-25 23:38:28.067922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.460 [2024-07-25 23:38:28.067954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:30.460 [2024-07-25 23:38:28.080756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190e5ec8 00:32:30.460 [2024-07-25 23:38:28.082197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.460 [2024-07-25 23:38:28.082245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:30.460 [2024-07-25 23:38:28.093791] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f6458 00:32:30.460 [2024-07-25 23:38:28.095339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.460 [2024-07-25 23:38:28.095384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:30.460 [2024-07-25 23:38:28.104497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f1868 00:32:30.460 [2024-07-25 23:38:28.105255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:7047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.460 [2024-07-25 23:38:28.105284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:30.460 [2024-07-25 23:38:28.117714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190ff3c8 00:32:30.460 [2024-07-25 23:38:28.118617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.460 [2024-07-25 23:38:28.118650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:30.460 [2024-07-25 23:38:28.132233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f9f68 00:32:30.460 [2024-07-25 23:38:28.134138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.460 [2024-07-25 23:38:28.134180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:30.460 [2024-07-25 23:38:28.144019] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190dece0 00:32:30.460 [2024-07-25 23:38:28.145401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.460 [2024-07-25 23:38:28.145443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:30.460 [2024-07-25 23:38:28.155518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f96f8 00:32:30.460 [2024-07-25 23:38:28.157420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.460 [2024-07-25 23:38:28.157453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:30.460 [2024-07-25 23:38:28.167161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190fb8b8 00:32:30.460 [2024-07-25 23:38:28.168066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.460 [2024-07-25 23:38:28.168111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:30.460 [2024-07-25 23:38:28.180234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190e0630 00:32:30.460 [2024-07-25 23:38:28.181303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.460 [2024-07-25 23:38:28.181346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:30.717 [2024-07-25 23:38:28.192305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190fb480 00:32:30.718 [2024-07-25 23:38:28.193342] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.718 [2024-07-25 23:38:28.193383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:30.718 [2024-07-25 23:38:28.205555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190ea248 00:32:30.718 [2024-07-25 23:38:28.206760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.718 [2024-07-25 23:38:28.206792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:30.718 [2024-07-25 23:38:28.219542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f0bc0 00:32:30.718 [2024-07-25 23:38:28.220957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.718 [2024-07-25 23:38:28.221002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:30.718 [2024-07-25 23:38:28.232564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190fc998 00:32:30.718 [2024-07-25 23:38:28.234137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.718 [2024-07-25 23:38:28.234164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:30.718 [2024-07-25 23:38:28.243025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f6020 00:32:30.718 [2024-07-25 23:38:28.243894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.718 [2024-07-25 23:38:28.243937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:30.718 [2024-07-25 23:38:28.255971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f6458 00:32:30.718 [2024-07-25 23:38:28.256986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.718 [2024-07-25 23:38:28.257029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:30.718 [2024-07-25 23:38:28.269144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190ed0b0 00:32:30.718 [2024-07-25 23:38:28.270329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.718 [2024-07-25 23:38:28.270372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:30.718 [2024-07-25 23:38:28.281094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f8a50 00:32:30.718 [2024-07-25 
23:38:28.282293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.718 [2024-07-25 23:38:28.282336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:30.718 [2024-07-25 23:38:28.294337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190e6b70 00:32:30.718 [2024-07-25 23:38:28.295710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.718 [2024-07-25 23:38:28.295753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:30.718 [2024-07-25 23:38:28.306170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f20d8 00:32:30.718 [2024-07-25 23:38:28.307025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.718 [2024-07-25 23:38:28.307074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:30.718 [2024-07-25 23:38:28.318918] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f0ff8 00:32:30.718 [2024-07-25 23:38:28.319640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.718 [2024-07-25 23:38:28.319670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:30.718 [2024-07-25 23:38:28.332217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f8618 00:32:30.718 [2024-07-25 23:38:28.333104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.718 [2024-07-25 23:38:28.333140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:30.718 [2024-07-25 23:38:28.345476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190dece0 00:32:30.718 [2024-07-25 23:38:28.346573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.718 [2024-07-25 23:38:28.346604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:30.718 [2024-07-25 23:38:28.360030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190e84c0 00:32:30.718 [2024-07-25 23:38:28.362096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.718 [2024-07-25 23:38:28.362139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:30.718 [2024-07-25 23:38:28.369010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190fd640 
00:32:30.718 [2024-07-25 23:38:28.369860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.718 [2024-07-25 23:38:28.369892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:30.718 [2024-07-25 23:38:28.380921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190e5220 00:32:30.718 [2024-07-25 23:38:28.381776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.718 [2024-07-25 23:38:28.381819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:30.718 [2024-07-25 23:38:28.395055] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f0788 00:32:30.718 [2024-07-25 23:38:28.396159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.718 [2024-07-25 23:38:28.396205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:30.718 [2024-07-25 23:38:28.408102] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190e95a0 00:32:30.718 [2024-07-25 23:38:28.409300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.718 [2024-07-25 23:38:28.409342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:30.718 [2024-07-25 23:38:28.420076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190f8618 00:32:30.718 [2024-07-25 23:38:28.421268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.718 [2024-07-25 23:38:28.421310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:30.718 [2024-07-25 23:38:28.433315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190fbcf0 00:32:30.718 [2024-07-25 23:38:28.434693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.718 [2024-07-25 23:38:28.434721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:30.976 [2024-07-25 23:38:28.446696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with pdu=0x2000190fef90 00:32:30.976 [2024-07-25 23:38:28.448258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.976 [2024-07-25 23:38:28.448302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:30.976 [2024-07-25 23:38:28.459932] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2246940) with 
pdu=0x2000190f3a28
[Several dozen repetitions of the same injected-error pattern follow, 2024-07-25 23:38:28.461 through 23:38:29.309: tcp.c:2113:data_crc32_calc_done logs *ERROR*: Data digest error on tqpair=(0x2246940) with a varying pdu; each is paired with a WRITE command print from nvme_qpair.c:243 (sqid:1, varying cid and lba, len:1, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and a completion from nvme_qpair.c:474 reported as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with matching qid/cid.]

                                                        Latency(us)
Device Information                     : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average  min      max
Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
nvme0n1                                : 2.01        20168.00  78.78  0.00    0.00  6335.70  3228.25  15631.55
===================================================================================================================
Total                                  :             20168.00  78.78  0.00    0.00  6335.70  3228.25  15631.55
0
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 158 > 0 ))
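The pass/fail check just traced reduces to a single RPC call plus a jq filter over its JSON output. A minimal standalone sketch of the same check, with the socket, paths, and filter taken verbatim from the trace above (only the count variable is added here for illustration):

    # Query per-bdev I/O statistics from the bdevperf instance; because
    # bdev_nvme_set_options was called with --nvme-error-stat, NVMe error
    # counters are exposed under .driver_specific.nvme_error.
    count=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # Each injected digest corruption should have completed as a transient
    # transport error, so the test asserts a non-zero count (158 in this run).
    (( count > 0 ))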
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1529632
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1529632 ']'
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1529632
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1529632
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1529632'
killing process with pid 1529632
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1529632
Received shutdown signal, test time was about 2.000000 seconds

                                                        Latency(us)
Device Information                     : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average  min   max
===================================================================================================================
Total                                  :             0.00      0.00   0.00    0.00  0.00     0.00  0.00

23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1529632
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1530152
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1530152 /var/tmp/bperf.sock
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1530152 ']'
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
23:38:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
[2024-07-25 23:38:29.863456] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
[2024-07-25 23:38:29.863543] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1530152 ]
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-25 23:38:29.895436] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
[2024-07-25 23:38:29.923395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-25 23:38:30.009977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
23:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
23:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
23:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
23:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
23:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
23:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
23:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
23:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
23:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
23:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
nvme0n1
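Stripped of the xtrace noise, the setup for this second error-injection pass is three RPC calls. A condensed sketch, assuming the harness convention visible above (bperf_rpc goes to the bdevperf socket /var/tmp/bperf.sock, while a bare rpc_cmd goes to the NVMe-oF target application, assumed here to listen on rpc.py's default socket):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Count NVMe errors per status code and retry failed I/O indefinitely, so
    # injected digest errors are recorded instead of failing the job outright.
    $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any crc32c error injection left over on the target side.
    $RPC accel_error_inject_error -o crc32c -t disable
    # Attach the target subsystem over TCP with data digest enabled (--ddgst);
    # the crc32c corruption enabled next is what trips this digest check.
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0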
23:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
23:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
23:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
23:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
23:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
23:38:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
Running I/O for 2 seconds...
[Several dozen repetitions of the injected-digest-error pattern follow, 2024-07-25 23:38:30.953 through 23:38:31.395: tcp.c:2113:data_crc32_calc_done logs *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90; each is paired with a WRITE command print from nvme_qpair.c:243 (sqid:1 cid:15 nsid:1, varying lba, len:32, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and a completion from nvme_qpair.c:474 reported as COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 p:0 m:0 dnr:0 with a varying sqhd.]
[2024-07-25 23:38:31.395157] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.875 [2024-07-25 23:38:31.395191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.875 [2024-07-25 23:38:31.402000] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.875 [2024-07-25 23:38:31.402345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.875 [2024-07-25 23:38:31.402374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.875 [2024-07-25 23:38:31.409699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.875 [2024-07-25 23:38:31.410086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.875 [2024-07-25 23:38:31.410134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.875 [2024-07-25 23:38:31.417471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.875 [2024-07-25 23:38:31.417863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.875 [2024-07-25 23:38:31.417895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.875 [2024-07-25 23:38:31.424427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.875 [2024-07-25 23:38:31.424722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.875 [2024-07-25 23:38:31.424752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.875 [2024-07-25 23:38:31.431093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.875 [2024-07-25 23:38:31.431459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.875 [2024-07-25 23:38:31.431490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.875 [2024-07-25 23:38:31.438021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.875 [2024-07-25 23:38:31.438389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.875 [2024-07-25 23:38:31.438418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.875 [2024-07-25 23:38:31.444741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.875 
[2024-07-25 23:38:31.445105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.875 [2024-07-25 23:38:31.445134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.875 [2024-07-25 23:38:31.451341] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.875 [2024-07-25 23:38:31.451642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.875 [2024-07-25 23:38:31.451672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.875 [2024-07-25 23:38:31.458043] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.875 [2024-07-25 23:38:31.458404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.875 [2024-07-25 23:38:31.458448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.875 [2024-07-25 23:38:31.464980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.875 [2024-07-25 23:38:31.465286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.876 [2024-07-25 23:38:31.465318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.876 [2024-07-25 23:38:31.471384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.876 [2024-07-25 23:38:31.471705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.876 [2024-07-25 23:38:31.471742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.876 [2024-07-25 23:38:31.477493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.876 [2024-07-25 23:38:31.477785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.876 [2024-07-25 23:38:31.477830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.876 [2024-07-25 23:38:31.483821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.876 [2024-07-25 23:38:31.484168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.876 [2024-07-25 23:38:31.484198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.876 [2024-07-25 23:38:31.489857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.876 [2024-07-25 23:38:31.490164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.876 [2024-07-25 23:38:31.490198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.876 [2024-07-25 23:38:31.495778] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.876 [2024-07-25 23:38:31.496126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.876 [2024-07-25 23:38:31.496157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.876 [2024-07-25 23:38:31.502482] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.876 [2024-07-25 23:38:31.502845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.876 [2024-07-25 23:38:31.502875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.876 [2024-07-25 23:38:31.509533] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.876 [2024-07-25 23:38:31.509830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.876 [2024-07-25 23:38:31.509876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.876 [2024-07-25 23:38:31.516943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.876 [2024-07-25 23:38:31.517310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.876 [2024-07-25 23:38:31.517342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.876 [2024-07-25 23:38:31.524241] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.876 [2024-07-25 23:38:31.524529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.876 [2024-07-25 23:38:31.524559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.876 [2024-07-25 23:38:31.531072] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.876 [2024-07-25 23:38:31.531365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.876 [2024-07-25 23:38:31.531395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.876 [2024-07-25 23:38:31.537788] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.876 [2024-07-25 23:38:31.538088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.876 [2024-07-25 23:38:31.538134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.876 [2024-07-25 23:38:31.544197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.876 [2024-07-25 23:38:31.544494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.876 [2024-07-25 23:38:31.544524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.876 [2024-07-25 23:38:31.550686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.876 [2024-07-25 23:38:31.550962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.876 [2024-07-25 23:38:31.550991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.876 [2024-07-25 23:38:31.557726] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.876 [2024-07-25 23:38:31.557998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.876 [2024-07-25 23:38:31.558028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.876 [2024-07-25 23:38:31.564045] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.876 [2024-07-25 23:38:31.564371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.876 [2024-07-25 23:38:31.564417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.876 [2024-07-25 23:38:31.570818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.876 [2024-07-25 23:38:31.571108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.876 [2024-07-25 23:38:31.571166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.876 [2024-07-25 23:38:31.577580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.876 [2024-07-25 23:38:31.577869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.876 [2024-07-25 23:38:31.577913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:32:33.876 [2024-07-25 23:38:31.583718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.876 [2024-07-25 23:38:31.584025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.876 [2024-07-25 23:38:31.584055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.876 [2024-07-25 23:38:31.589825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.876 [2024-07-25 23:38:31.590128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.876 [2024-07-25 23:38:31.590160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.876 [2024-07-25 23:38:31.596654] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:33.876 [2024-07-25 23:38:31.596956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.876 [2024-07-25 23:38:31.596987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.136 [2024-07-25 23:38:31.603312] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.136 [2024-07-25 23:38:31.603606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.136 [2024-07-25 23:38:31.603652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.136 [2024-07-25 23:38:31.609898] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.136 [2024-07-25 23:38:31.610204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.136 [2024-07-25 23:38:31.610235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:34.136 [2024-07-25 23:38:31.615649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.136 [2024-07-25 23:38:31.615951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.136 [2024-07-25 23:38:31.615981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.136 [2024-07-25 23:38:31.621871] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.136 [2024-07-25 23:38:31.622185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.136 [2024-07-25 23:38:31.622216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.136 [2024-07-25 23:38:31.627664] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.136 [2024-07-25 23:38:31.627975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.136 [2024-07-25 23:38:31.628005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.136 [2024-07-25 23:38:31.633269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.136 [2024-07-25 23:38:31.633571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.136 [2024-07-25 23:38:31.633600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:34.136 [2024-07-25 23:38:31.638953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.136 [2024-07-25 23:38:31.639264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.136 [2024-07-25 23:38:31.639294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.136 [2024-07-25 23:38:31.644805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.136 [2024-07-25 23:38:31.645117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.136 [2024-07-25 23:38:31.645162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.136 [2024-07-25 23:38:31.650856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.136 [2024-07-25 23:38:31.651203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.136 [2024-07-25 23:38:31.651233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.136 [2024-07-25 23:38:31.657358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.136 [2024-07-25 23:38:31.657654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.136 [2024-07-25 23:38:31.657683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:34.136 [2024-07-25 23:38:31.663752] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.136 [2024-07-25 23:38:31.664033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.136 [2024-07-25 23:38:31.664084] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.136 [2024-07-25 23:38:31.670461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.136 [2024-07-25 23:38:31.670767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.136 [2024-07-25 23:38:31.670797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.136 [2024-07-25 23:38:31.676696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.136 [2024-07-25 23:38:31.676981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.136 [2024-07-25 23:38:31.677026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.136 [2024-07-25 23:38:31.682388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.136 [2024-07-25 23:38:31.682671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.136 [2024-07-25 23:38:31.682701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:34.136 [2024-07-25 23:38:31.688751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.136 [2024-07-25 23:38:31.689053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.136 [2024-07-25 23:38:31.689108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.136 [2024-07-25 23:38:31.696343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.136 [2024-07-25 23:38:31.696729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.136 [2024-07-25 23:38:31.696773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.136 [2024-07-25 23:38:31.704660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.136 [2024-07-25 23:38:31.705015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.136 [2024-07-25 23:38:31.705045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.136 [2024-07-25 23:38:31.713005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.136 [2024-07-25 23:38:31.713346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.136 [2024-07-25 23:38:31.713392] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:34.136 [2024-07-25 23:38:31.720743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.136 [2024-07-25 23:38:31.721027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.136 [2024-07-25 23:38:31.721057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.137 [2024-07-25 23:38:31.728784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.137 [2024-07-25 23:38:31.729141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.137 [2024-07-25 23:38:31.729174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.137 [2024-07-25 23:38:31.737093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.137 [2024-07-25 23:38:31.737492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.137 [2024-07-25 23:38:31.737536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.137 [2024-07-25 23:38:31.745640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.137 [2024-07-25 23:38:31.745978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.137 [2024-07-25 23:38:31.746032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:34.137 [2024-07-25 23:38:31.754098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.137 [2024-07-25 23:38:31.754511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.137 [2024-07-25 23:38:31.754541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.137 [2024-07-25 23:38:31.762568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.137 [2024-07-25 23:38:31.762959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.137 [2024-07-25 23:38:31.762990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.137 [2024-07-25 23:38:31.770931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.137 [2024-07-25 23:38:31.771317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:34.137 [2024-07-25 23:38:31.771347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.137 [2024-07-25 23:38:31.779529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.137 [2024-07-25 23:38:31.779867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.137 [2024-07-25 23:38:31.779910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:34.137 [2024-07-25 23:38:31.787852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.137 [2024-07-25 23:38:31.788238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.137 [2024-07-25 23:38:31.788270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.137 [2024-07-25 23:38:31.796200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.137 [2024-07-25 23:38:31.796600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.137 [2024-07-25 23:38:31.796628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.137 [2024-07-25 23:38:31.804544] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.137 [2024-07-25 23:38:31.804921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.137 [2024-07-25 23:38:31.804951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.137 [2024-07-25 23:38:31.812775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.137 [2024-07-25 23:38:31.813187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.137 [2024-07-25 23:38:31.813219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:34.137 [2024-07-25 23:38:31.821236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.137 [2024-07-25 23:38:31.821598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.137 [2024-07-25 23:38:31.821628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.137 [2024-07-25 23:38:31.829421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.137 [2024-07-25 23:38:31.829808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.137 [2024-07-25 23:38:31.829838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.137 [2024-07-25 23:38:31.837833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.137 [2024-07-25 23:38:31.838234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.137 [2024-07-25 23:38:31.838270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.137 [2024-07-25 23:38:31.846522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.137 [2024-07-25 23:38:31.846924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.137 [2024-07-25 23:38:31.846956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:34.137 [2024-07-25 23:38:31.854348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.137 [2024-07-25 23:38:31.854726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.137 [2024-07-25 23:38:31.854756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.397 [2024-07-25 23:38:31.860901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.397 [2024-07-25 23:38:31.861250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.397 [2024-07-25 23:38:31.861281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.397 [2024-07-25 23:38:31.868558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.397 [2024-07-25 23:38:31.868894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.397 [2024-07-25 23:38:31.868926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.397 [2024-07-25 23:38:31.876208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.397 [2024-07-25 23:38:31.876621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.397 [2024-07-25 23:38:31.876652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:34.397 [2024-07-25 23:38:31.883904] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.397 [2024-07-25 23:38:31.884283] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.397 [2024-07-25 23:38:31.884327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.397 [2024-07-25 23:38:31.892030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.397 [2024-07-25 23:38:31.892430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.397 [2024-07-25 23:38:31.892461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.397 [2024-07-25 23:38:31.899978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.397 [2024-07-25 23:38:31.900293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.397 [2024-07-25 23:38:31.900324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.397 [2024-07-25 23:38:31.907833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.397 [2024-07-25 23:38:31.908213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.397 [2024-07-25 23:38:31.908250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:34.397 [2024-07-25 23:38:31.915543] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.397 [2024-07-25 23:38:31.915901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.397 [2024-07-25 23:38:31.915945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.397 [2024-07-25 23:38:31.923768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.397 [2024-07-25 23:38:31.924142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.397 [2024-07-25 23:38:31.924172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.397 [2024-07-25 23:38:31.930792] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.397 [2024-07-25 23:38:31.931081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.397 [2024-07-25 23:38:31.931111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.397 [2024-07-25 23:38:31.936216] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.397 [2024-07-25 23:38:31.936526] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.397 [2024-07-25 23:38:31.936555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:34.397 [2024-07-25 23:38:31.942209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.397 [2024-07-25 23:38:31.942493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.397 [2024-07-25 23:38:31.942522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.397 [2024-07-25 23:38:31.947610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.397 [2024-07-25 23:38:31.947885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.397 [2024-07-25 23:38:31.947915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.397 [2024-07-25 23:38:31.953205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.397 [2024-07-25 23:38:31.953500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.397 [2024-07-25 23:38:31.953529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.397 [2024-07-25 23:38:31.959400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.397 [2024-07-25 23:38:31.959668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.397 [2024-07-25 23:38:31.959712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:34.397 [2024-07-25 23:38:31.965435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.397 [2024-07-25 23:38:31.965693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.397 [2024-07-25 23:38:31.965722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.397 [2024-07-25 23:38:31.971182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.397 [2024-07-25 23:38:31.971458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.397 [2024-07-25 23:38:31.971487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.397 [2024-07-25 23:38:31.977088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 
00:32:34.397 [2024-07-25 23:38:31.977352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.397 [2024-07-25 23:38:31.977398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.397 [2024-07-25 23:38:31.982988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.397 [2024-07-25 23:38:31.983272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.397 [2024-07-25 23:38:31.983303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:34.397 [2024-07-25 23:38:31.988890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.397 [2024-07-25 23:38:31.989178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.398 [2024-07-25 23:38:31.989209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.398 [2024-07-25 23:38:31.994991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.398 [2024-07-25 23:38:31.995268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.398 [2024-07-25 23:38:31.995299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:34.398 [2024-07-25 23:38:32.001117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.398 [2024-07-25 23:38:32.001377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.398 [2024-07-25 23:38:32.001422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.398 [2024-07-25 23:38:32.007149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.398 [2024-07-25 23:38:32.007433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.398 [2024-07-25 23:38:32.007462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:34.398 [2024-07-25 23:38:32.013029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:34.398 [2024-07-25 23:38:32.013327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:34.398 [2024-07-25 23:38:32.013358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:34.398 [2024-07-25 23:38:32.019173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22485c0) with pdu=0x2000190fef90
00:32:34.398 [2024-07-25 23:38:32.019449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.398 [2024-07-25 23:38:32.019478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:34.398 [2024-07-25 23:38:32.025142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90
00:32:34.398 [2024-07-25 23:38:32.025422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.398 [2024-07-25 23:38:32.025451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:34.398 [2024-07-25 23:38:32.031226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90
00:32:34.398 [2024-07-25 23:38:32.031527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.398 [2024-07-25 23:38:32.031558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern (a data_crc32_calc_done *ERROR* on tqpair=(0x22485c0) with pdu=0x2000190fef90, the affected WRITE (sqid:1 cid:15 nsid:1 len:32, lba varies), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0001/0021/0041/0061) repeats every 6-8 ms from 23:38:32.037 through 23:38:32.946; the wall-clock prefix advances from 00:32:34.398 to 00:32:35.440 ...]
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.440 [2024-07-25 23:38:32.910181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.440 [2024-07-25 23:38:32.915727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:35.440 [2024-07-25 23:38:32.915986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.440 [2024-07-25 23:38:32.916016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.440 [2024-07-25 23:38:32.921347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:35.440 [2024-07-25 23:38:32.921621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.440 [2024-07-25 23:38:32.921650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.440 [2024-07-25 23:38:32.927661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:35.440 [2024-07-25 23:38:32.927935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.440 [2024-07-25 23:38:32.927980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.440 [2024-07-25 23:38:32.933911] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:35.440 [2024-07-25 23:38:32.934291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.440 [2024-07-25 23:38:32.934336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.440 [2024-07-25 23:38:32.939946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:35.440 [2024-07-25 23:38:32.940230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.440 [2024-07-25 23:38:32.940267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.440 [2024-07-25 23:38:32.945728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22485c0) with pdu=0x2000190fef90 00:32:35.440 [2024-07-25 23:38:32.946023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.440 [2024-07-25 23:38:32.946052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.440 00:32:35.440 Latency(us) 00:32:35.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:35.440 Job: nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 16, IO size: 131072)
00:32:35.440 nvme0n1 : 2.00 4576.01 572.00 0.00 0.00 3488.40 2451.53 9272.13
00:32:35.440 ===================================================================================================================
00:32:35.440 Total : 4576.01 572.00 0.00 0.00 3488.40 2451.53 9272.13
00:32:35.440 0
00:32:35.440 23:38:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:35.440 23:38:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:35.440 23:38:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:35.440 | .driver_specific
00:32:35.440 | .nvme_error
00:32:35.440 | .status_code
00:32:35.440 | .command_transient_transport_error'
00:32:35.440 23:38:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:35.699 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 295 > 0 ))
00:32:35.699 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1530152
00:32:35.699 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1530152 ']'
00:32:35.699 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1530152
00:32:35.699 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:32:35.699 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:35.699 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1530152
00:32:35.699 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:35.699 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:35.699 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1530152'
00:32:35.699 killing process with pid 1530152
00:32:35.699 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1530152
00:32:35.699 Received shutdown signal, test time was about 2.000000 seconds
00:32:35.699
00:32:35.699 Latency(us)
00:32:35.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:35.699 ===================================================================================================================
00:32:35.699 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:35.699 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1530152
00:32:35.956 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1528790
00:32:35.956 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1528790 ']'
00:32:35.956 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1528790
00:32:35.956 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:32:35.956 23:38:33
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:35.956 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1528790 00:32:35.956 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:35.956 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:35.956 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1528790' 00:32:35.956 killing process with pid 1528790 00:32:35.956 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1528790 00:32:35.956 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1528790 00:32:36.215 00:32:36.215 real 0m15.007s 00:32:36.215 user 0m29.751s 00:32:36.215 sys 0m4.072s 00:32:36.215 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:36.215 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:36.215 ************************************ 00:32:36.215 END TEST nvmf_digest_error 00:32:36.215 ************************************ 00:32:36.215 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:32:36.215 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:32:36.215 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:36.215 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:32:36.215 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:36.215 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:32:36.215 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:36.215 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:36.215 rmmod nvme_tcp 00:32:36.215 rmmod nvme_fabrics 00:32:36.215 rmmod nvme_keyring 00:32:36.215 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:36.215 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:32:36.215 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:32:36.215 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1528790 ']' 00:32:36.215 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1528790 00:32:36.215 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1528790 ']' 00:32:36.215 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1528790 00:32:36.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1528790) - No such process 00:32:36.215 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1528790 is not found' 00:32:36.215 Process with pid 1528790 is not found 00:32:36.215 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:36.215 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:36.215 23:38:33 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:36.215 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:36.215 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:36.215 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.215 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:36.215 23:38:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:38.746 00:32:38.746 real 0m34.420s 00:32:38.746 user 1m0.456s 00:32:38.746 sys 0m9.751s 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:38.746 ************************************ 00:32:38.746 END TEST nvmf_digest 00:32:38.746 ************************************ 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.746 ************************************ 00:32:38.746 START TEST nvmf_bdevperf 00:32:38.746 ************************************ 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:38.746 * Looking for test storage... 
00:32:38.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:38.746 23:38:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:38.746 23:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:38.746 23:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:38.746 23:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:32:38.746 23:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:38.746 23:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:38.746 23:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:32:38.746 23:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:38.746 23:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:38.746 23:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.746 23:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:38.746 23:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.746 23:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:38.746 23:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:38.746 23:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:32:38.746 23:38:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:40.646 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:40.646 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:32:40.646 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:40.646 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:40.646 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:40.646 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:40.646 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:40.646 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:32:40.646 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:40.646 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:32:40.646 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:32:40.646 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:32:40.646 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:32:40.646 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:32:40.646 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:32:40.646 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:40.646 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:40.646 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:40.646 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:40.646 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:40.646 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:40.646 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:40.646 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:40.647 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:40.647 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:40.647 23:38:38 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:40.647 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:40.647 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:40.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:40.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:32:40.647 00:32:40.647 --- 10.0.0.2 ping statistics --- 00:32:40.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:40.647 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:40.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:40.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:32:40.647 00:32:40.647 --- 10.0.0.1 ping statistics --- 00:32:40.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:40.647 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1532501 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:40.647 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1532501 00:32:40.648 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1532501 ']' 
00:32:40.648 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:40.648 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:40.648 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:40.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:40.648 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:40.648 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:40.648 [2024-07-25 23:38:38.271225] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:32:40.648 [2024-07-25 23:38:38.271311] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:40.648 EAL: No free 2048 kB hugepages reported on node 1 00:32:40.648 [2024-07-25 23:38:38.308938] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:40.648 [2024-07-25 23:38:38.340940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:40.906 [2024-07-25 23:38:38.432179] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:40.906 [2024-07-25 23:38:38.432243] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:40.906 [2024-07-25 23:38:38.432269] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:40.906 [2024-07-25 23:38:38.432283] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:40.906 [2024-07-25 23:38:38.432295] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
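
With the target process up inside the network namespace, the rpc_cmd calls traced below configure it: a TCP transport, a 64 MiB malloc bdev (the MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE values set at the top of bdevperf.sh), and subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420. As a minimal standalone sketch of the same sequence, assuming the default /var/tmp/spdk.sock RPC socket (the rpc_addr the trace sets above) in place of the harness's rpc_cmd wrapper:

    # Sketch of the RPC configuration sequence traced below; rpc_cmd wraps this script.
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

    $RPC nvmf_create_transport -t tcp -o -u 8192      # transport opts as traced (-o/-u come from NVMF_TRANSPORT_OPTS)
    $RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
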
00:32:40.906 [2024-07-25 23:38:38.432394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:40.906 [2024-07-25 23:38:38.432442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:40.906 [2024-07-25 23:38:38.432444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:40.906 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:40.906 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:32:40.906 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:40.906 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:40.906 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:40.906 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:40.906 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:40.906 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.906 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:40.906 [2024-07-25 23:38:38.577213] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:40.906 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.906 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:40.906 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.906 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:40.906 Malloc0 00:32:40.906 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.906 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:40.906 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.906 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:40.906 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.906 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:40.906 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.906 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:41.164 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.164 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:41.164 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.164 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:41.164 [2024-07-25 23:38:38.641348] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:41.164 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.164 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:32:41.164 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:32:41.164 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:32:41.164 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:32:41.164 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:41.164 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:41.164 { 00:32:41.164 "params": { 00:32:41.164 "name": "Nvme$subsystem", 00:32:41.164 "trtype": "$TEST_TRANSPORT", 00:32:41.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:41.164 "adrfam": "ipv4", 00:32:41.164 "trsvcid": "$NVMF_PORT", 00:32:41.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:41.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:41.164 "hdgst": ${hdgst:-false}, 00:32:41.164 "ddgst": ${ddgst:-false} 00:32:41.164 }, 00:32:41.164 "method": "bdev_nvme_attach_controller" 00:32:41.164 } 00:32:41.164 EOF 00:32:41.164 )") 00:32:41.164 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:32:41.164 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:32:41.164 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:32:41.164 23:38:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:41.164 "params": { 00:32:41.164 "name": "Nvme1", 00:32:41.164 "trtype": "tcp", 00:32:41.164 "traddr": "10.0.0.2", 00:32:41.164 "adrfam": "ipv4", 00:32:41.164 "trsvcid": "4420", 00:32:41.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:41.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:41.164 "hdgst": false, 00:32:41.164 "ddgst": false 00:32:41.164 }, 00:32:41.164 "method": "bdev_nvme_attach_controller" 00:32:41.164 }' 00:32:41.164 [2024-07-25 23:38:38.691176] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:32:41.164 [2024-07-25 23:38:38.691246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1532529 ] 00:32:41.164 EAL: No free 2048 kB hugepages reported on node 1 00:32:41.164 [2024-07-25 23:38:38.723925] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:41.164 [2024-07-25 23:38:38.753304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.164 [2024-07-25 23:38:38.849291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:41.422 Running I/O for 1 seconds... 
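
The bdevperf run above never reads a config file from disk: gen_nvmf_target_json expands the heredoc shown in the trace into a bdev_nvme_attach_controller entry, and bdevperf receives it on a file descriptor via --json /dev/fd/62. A sketch of an equivalent standalone run writing the config to a file instead; the inner object is verbatim the resolved JSON printf'd above, while the outer "subsystems"/"bdev" wrapper is an assumption about how gen_nvmf_target_json packages it:

    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF
    # -q 128: queue depth, -o 4096: 4 KiB I/Os, -w verify: write, read back and compare, -t 1: run one second
    ./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1
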
00:32:42.795
00:32:42.795 Latency(us)
00:32:42.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:42.795 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:42.795 Verification LBA range: start 0x0 length 0x4000
00:32:42.795 Nvme1n1 : 1.01 8684.88 33.93 0.00 0.00 14677.33 916.29 16311.18
00:32:42.795 ===================================================================================================================
00:32:42.795 Total : 8684.88 33.93 0.00 0.00 14677.33 916.29 16311.18
00:32:42.795 23:38:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1532785
00:32:42.795 23:38:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:32:42.795 23:38:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:32:42.795 23:38:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:32:42.795 23:38:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:32:42.795 23:38:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:32:42.795 23:38:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:32:42.795 23:38:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:32:42.795 {
00:32:42.795 "params": {
00:32:42.795 "name": "Nvme$subsystem",
00:32:42.795 "trtype": "$TEST_TRANSPORT",
00:32:42.795 "traddr": "$NVMF_FIRST_TARGET_IP",
00:32:42.795 "adrfam": "ipv4",
00:32:42.795 "trsvcid": "$NVMF_PORT",
00:32:42.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:32:42.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:32:42.795 "hdgst": ${hdgst:-false},
00:32:42.795 "ddgst": ${ddgst:-false}
00:32:42.795 },
00:32:42.795 "method": "bdev_nvme_attach_controller"
00:32:42.795 }
00:32:42.795 EOF
00:32:42.795 )")
00:32:42.795 23:38:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:32:42.795 23:38:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:32:42.795 23:38:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:32:42.795 23:38:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:32:42.795 "params": {
00:32:42.795 "name": "Nvme1",
00:32:42.795 "trtype": "tcp",
00:32:42.795 "traddr": "10.0.0.2",
00:32:42.795 "adrfam": "ipv4",
00:32:42.795 "trsvcid": "4420",
00:32:42.795 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:32:42.795 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:32:42.795 "hdgst": false,
00:32:42.795 "ddgst": false
00:32:42.795 },
00:32:42.795 "method": "bdev_nvme_attach_controller"
00:32:42.795 }'
00:32:42.795 [2024-07-25 23:38:40.425835] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:32:42.795 [2024-07-25 23:38:40.425929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1532785 ]
00:32:42.795 EAL: No free 2048 kB hugepages reported on node 1
00:32:42.795 [2024-07-25 23:38:40.458301] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
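
This second bdevperf instance is the error-path half of the test: the same verify workload, but a 15-second run (-t 15, plus the -f flag copied from the traced invocation, whose semantics are not asserted here) that the script deliberately sabotages. As the trace below shows (host/bdevperf.sh@33), the nvmf target, pid 1532501, is hard-killed a few seconds in, so every command still queued on the TCP qpair is failed back with ABORTED - SQ DELETION (00/08). In sketch form, with $SPDK_DIR as a hypothetical stand-in for the checkout path:

    # Hedged sketch of the failure injection traced below.
    "$SPDK_DIR/build/examples/bdevperf" --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!
    sleep 3
    kill -9 "$nvmfpid"    # hard-kill the target mid-run (pid 1532501 in this log)
    sleep 3               # the host sees the connection drop; queued I/O aborts
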
00:32:42.795 [2024-07-25 23:38:40.486804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:43.053 [2024-07-25 23:38:40.575271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:32:43.310 Running I/O for 15 seconds...
00:32:45.840 23:38:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1532501
00:32:45.840 23:38:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:32:45.840 [2024-07-25 23:38:43.397083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:48736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:45.840 [2024-07-25 23:38:43.397149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:45.840 [2024-07-25 23:38:43.397672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:47848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:45.840 [2024-07-25 23:38:43.397687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log condensed: the same print_command / print_completion pair repeats for the roughly 125 remaining commands outstanding on qid 1 — WRITEs lba 48744-48864 and READs lba 47856-48720, about one full queue depth in total — every one completed ABORTED - SQ DELETION (00/08)]
00:32:45.844 [2024-07-25 23:38:43.401615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21afe60 is same with the state(5) to be set
00:32:45.844 [2024-07-25 23:38:43.401635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:45.844 [2024-07-25 23:38:43.401649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:45.844 [2024-07-25 23:38:43.401663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48728 len:8 PRP1 0x0 PRP2 0x0
00:32:45.844 [2024-07-25 23:38:43.401677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:45.844 [2024-07-25 23:38:43.401744] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21afe60 was disconnected and freed. reset controller.
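[editor's note] Each completion line above prints its NVMe status as an (SCT/SC) pair; the "(00/08)" flooding this log is status-code-type 0x0 (generic command status) with status code 0x08, "Command Aborted due to SQ Deletion" per the NVMe base spec — what the host driver reports for every in-flight command once the target side of the queue vanishes after the kill -9. A minimal decoding sketch (illustrative only, not SPDK's actual printer; only the codes seen in this log are mapped):

```python
# Illustrative sketch: decode the "(SCT/SC)" field that
# spdk_nvme_print_completion logs, e.g. "(00/08)" above.
# Values follow the NVMe base spec; map is intentionally partial.

GENERIC_STATUS = {  # Status Code Type 0x0: generic command status
    0x00: "SUCCESS",
    0x08: "ABORTED - SQ DELETION",  # the status repeated throughout this log
}

def decode_status(field: str) -> str:
    """Turn a hex 'SCT/SC' pair such as '00/08' into readable text."""
    sct, sc = (int(part, 16) for part in field.split("/"))
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"SCT 0x{sct:x} / SC 0x{sc:02x}"

print(decode_status("00/08"))  # -> ABORTED - SQ DELETION
```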
00:32:45.844 [2024-07-25 23:38:43.405625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:45.844 [2024-07-25 23:38:43.405702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor
00:32:45.844 [2024-07-25 23:38:43.406371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.844 [2024-07-25 23:38:43.406406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420
00:32:45.844 [2024-07-25 23:38:43.406426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set
00:32:45.844 [2024-07-25 23:38:43.406668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor
00:32:45.844 [2024-07-25 23:38:43.406914] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:45.844 [2024-07-25 23:38:43.406940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:45.844 [2024-07-25 23:38:43.406960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:45.844 [2024-07-25 23:38:43.410608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[log condensed: the identical reset cycle — resetting controller, connect() failed errno = 111 to 10.0.0.2 port 4420 on tqpair=0x1f7db50, controller reinitialization failed, Resetting controller failed — repeats 17 more times at roughly 14 ms intervals, timestamps 23:38:43.419888 through 23:38:43.647827, while the killed target stays down]
00:32:46.104 [2024-07-25 23:38:43.657102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.104 [2024-07-25 23:38:43.657502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.104 [2024-07-25 23:38:43.657529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.104 [2024-07-25 23:38:43.657561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.104 [2024-07-25 23:38:43.657812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.104 [2024-07-25 23:38:43.658055] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.104 [2024-07-25 23:38:43.658089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.104 [2024-07-25 23:38:43.658106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.104 [2024-07-25 23:38:43.661670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.104 [2024-07-25 23:38:43.670934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.104 [2024-07-25 23:38:43.671336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.105 [2024-07-25 23:38:43.671368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.105 [2024-07-25 23:38:43.671386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.105 [2024-07-25 23:38:43.671626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.105 [2024-07-25 23:38:43.671869] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.105 [2024-07-25 23:38:43.671893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.105 [2024-07-25 23:38:43.671908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.105 [2024-07-25 23:38:43.675483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.105 [2024-07-25 23:38:43.684949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.105 [2024-07-25 23:38:43.685320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.105 [2024-07-25 23:38:43.685347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.105 [2024-07-25 23:38:43.685366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.105 [2024-07-25 23:38:43.685583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.105 [2024-07-25 23:38:43.685836] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.105 [2024-07-25 23:38:43.685860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.105 [2024-07-25 23:38:43.685875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.105 [2024-07-25 23:38:43.689454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.105 [2024-07-25 23:38:43.698924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.105 [2024-07-25 23:38:43.699369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.105 [2024-07-25 23:38:43.699397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.105 [2024-07-25 23:38:43.699428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.105 [2024-07-25 23:38:43.699672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.105 [2024-07-25 23:38:43.699916] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.105 [2024-07-25 23:38:43.699940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.105 [2024-07-25 23:38:43.699956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.105 [2024-07-25 23:38:43.703533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.105 [2024-07-25 23:38:43.712794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.105 [2024-07-25 23:38:43.713217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.105 [2024-07-25 23:38:43.713248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.105 [2024-07-25 23:38:43.713266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.105 [2024-07-25 23:38:43.713505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.105 [2024-07-25 23:38:43.713748] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.105 [2024-07-25 23:38:43.713772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.105 [2024-07-25 23:38:43.713787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.105 [2024-07-25 23:38:43.717361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.105 [2024-07-25 23:38:43.726850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.105 [2024-07-25 23:38:43.727241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.105 [2024-07-25 23:38:43.727272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.105 [2024-07-25 23:38:43.727290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.105 [2024-07-25 23:38:43.727539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.105 [2024-07-25 23:38:43.727782] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.105 [2024-07-25 23:38:43.727811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.105 [2024-07-25 23:38:43.727828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.105 [2024-07-25 23:38:43.731404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.105 [2024-07-25 23:38:43.740881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.105 [2024-07-25 23:38:43.741323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.105 [2024-07-25 23:38:43.741350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.105 [2024-07-25 23:38:43.741365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.105 [2024-07-25 23:38:43.741621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.105 [2024-07-25 23:38:43.741865] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.105 [2024-07-25 23:38:43.741889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.105 [2024-07-25 23:38:43.741904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.105 [2024-07-25 23:38:43.745479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.105 [2024-07-25 23:38:43.754743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.105 [2024-07-25 23:38:43.755192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.105 [2024-07-25 23:38:43.755219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.105 [2024-07-25 23:38:43.755251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.105 [2024-07-25 23:38:43.755499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.105 [2024-07-25 23:38:43.755743] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.105 [2024-07-25 23:38:43.755767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.105 [2024-07-25 23:38:43.755782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.105 [2024-07-25 23:38:43.759361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.105 [2024-07-25 23:38:43.768625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.105 [2024-07-25 23:38:43.769012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.105 [2024-07-25 23:38:43.769043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.105 [2024-07-25 23:38:43.769070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.105 [2024-07-25 23:38:43.769320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.105 [2024-07-25 23:38:43.769575] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.105 [2024-07-25 23:38:43.769600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.105 [2024-07-25 23:38:43.769615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.105 [2024-07-25 23:38:43.773188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.105 [2024-07-25 23:38:43.782658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.105 [2024-07-25 23:38:43.783085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.105 [2024-07-25 23:38:43.783116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.105 [2024-07-25 23:38:43.783134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.105 [2024-07-25 23:38:43.783373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.106 [2024-07-25 23:38:43.783616] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.106 [2024-07-25 23:38:43.783640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.106 [2024-07-25 23:38:43.783656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.106 [2024-07-25 23:38:43.787234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.106 [2024-07-25 23:38:43.796496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.106 [2024-07-25 23:38:43.796892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.106 [2024-07-25 23:38:43.796919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.106 [2024-07-25 23:38:43.796935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.106 [2024-07-25 23:38:43.797178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.106 [2024-07-25 23:38:43.797423] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.106 [2024-07-25 23:38:43.797447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.106 [2024-07-25 23:38:43.797463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.106 [2024-07-25 23:38:43.801029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.106 [2024-07-25 23:38:43.810500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.106 [2024-07-25 23:38:43.810913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.106 [2024-07-25 23:38:43.810944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.106 [2024-07-25 23:38:43.810962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.106 [2024-07-25 23:38:43.811229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.106 [2024-07-25 23:38:43.811471] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.106 [2024-07-25 23:38:43.811495] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.106 [2024-07-25 23:38:43.811511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.106 [2024-07-25 23:38:43.815088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.106 [2024-07-25 23:38:43.824356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.106 [2024-07-25 23:38:43.824764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.106 [2024-07-25 23:38:43.824795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.106 [2024-07-25 23:38:43.824813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.106 [2024-07-25 23:38:43.825068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.106 [2024-07-25 23:38:43.825312] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.106 [2024-07-25 23:38:43.825337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.106 [2024-07-25 23:38:43.825353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.364 [2024-07-25 23:38:43.828934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.364 [2024-07-25 23:38:43.838222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.364 [2024-07-25 23:38:43.838609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.364 [2024-07-25 23:38:43.838641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.364 [2024-07-25 23:38:43.838659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.364 [2024-07-25 23:38:43.838899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.364 [2024-07-25 23:38:43.839152] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.364 [2024-07-25 23:38:43.839177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.364 [2024-07-25 23:38:43.839193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.364 [2024-07-25 23:38:43.842763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.364 [2024-07-25 23:38:43.852246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.364 [2024-07-25 23:38:43.852657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.364 [2024-07-25 23:38:43.852688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.364 [2024-07-25 23:38:43.852707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.364 [2024-07-25 23:38:43.852946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.364 [2024-07-25 23:38:43.853198] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.364 [2024-07-25 23:38:43.853223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.364 [2024-07-25 23:38:43.853239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.364 [2024-07-25 23:38:43.856803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.364 [2024-07-25 23:38:43.866289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.364 [2024-07-25 23:38:43.866673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.364 [2024-07-25 23:38:43.866705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.364 [2024-07-25 23:38:43.866723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.364 [2024-07-25 23:38:43.866963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.364 [2024-07-25 23:38:43.867218] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.365 [2024-07-25 23:38:43.867243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.365 [2024-07-25 23:38:43.867264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.365 [2024-07-25 23:38:43.870831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.365 [2024-07-25 23:38:43.880120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.365 [2024-07-25 23:38:43.880540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.365 [2024-07-25 23:38:43.880572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.365 [2024-07-25 23:38:43.880590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.365 [2024-07-25 23:38:43.880829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.365 [2024-07-25 23:38:43.881085] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.365 [2024-07-25 23:38:43.881110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.365 [2024-07-25 23:38:43.881125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.365 [2024-07-25 23:38:43.884692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.365 [2024-07-25 23:38:43.893959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.365 [2024-07-25 23:38:43.894461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.365 [2024-07-25 23:38:43.894493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.365 [2024-07-25 23:38:43.894511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.365 [2024-07-25 23:38:43.894750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.365 [2024-07-25 23:38:43.894994] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.365 [2024-07-25 23:38:43.895018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.365 [2024-07-25 23:38:43.895034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.365 [2024-07-25 23:38:43.898592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.365 [2024-07-25 23:38:43.907867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.365 [2024-07-25 23:38:43.908270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.365 [2024-07-25 23:38:43.908301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.365 [2024-07-25 23:38:43.908319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.365 [2024-07-25 23:38:43.908557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.365 [2024-07-25 23:38:43.908801] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.365 [2024-07-25 23:38:43.908826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.365 [2024-07-25 23:38:43.908842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.365 [2024-07-25 23:38:43.912414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.365 [2024-07-25 23:38:43.921886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.365 [2024-07-25 23:38:43.922321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.365 [2024-07-25 23:38:43.922352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.365 [2024-07-25 23:38:43.922371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.365 [2024-07-25 23:38:43.922610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.365 [2024-07-25 23:38:43.922853] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.365 [2024-07-25 23:38:43.922877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.365 [2024-07-25 23:38:43.922893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.365 [2024-07-25 23:38:43.926476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.365 [2024-07-25 23:38:43.935752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.365 [2024-07-25 23:38:43.936171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.365 [2024-07-25 23:38:43.936200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.365 [2024-07-25 23:38:43.936216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.365 [2024-07-25 23:38:43.936466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.365 [2024-07-25 23:38:43.936710] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.365 [2024-07-25 23:38:43.936734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.365 [2024-07-25 23:38:43.936750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.365 [2024-07-25 23:38:43.940329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.365 [2024-07-25 23:38:43.949608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.365 [2024-07-25 23:38:43.949991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.365 [2024-07-25 23:38:43.950022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.365 [2024-07-25 23:38:43.950041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.365 [2024-07-25 23:38:43.950315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.365 [2024-07-25 23:38:43.950566] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.365 [2024-07-25 23:38:43.950590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.365 [2024-07-25 23:38:43.950606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.365 [2024-07-25 23:38:43.954177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.365 [2024-07-25 23:38:43.963644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.365 [2024-07-25 23:38:43.964072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.365 [2024-07-25 23:38:43.964104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.365 [2024-07-25 23:38:43.964122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.365 [2024-07-25 23:38:43.964361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.365 [2024-07-25 23:38:43.964610] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.365 [2024-07-25 23:38:43.964635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.365 [2024-07-25 23:38:43.964651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.365 [2024-07-25 23:38:43.968224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.365 [2024-07-25 23:38:43.977490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.365 [2024-07-25 23:38:43.977887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.365 [2024-07-25 23:38:43.977919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.365 [2024-07-25 23:38:43.977938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.365 [2024-07-25 23:38:43.978211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.365 [2024-07-25 23:38:43.978446] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.365 [2024-07-25 23:38:43.978470] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.365 [2024-07-25 23:38:43.978486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.365 [2024-07-25 23:38:43.982049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.365 [2024-07-25 23:38:43.991521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.366 [2024-07-25 23:38:43.991903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.366 [2024-07-25 23:38:43.991934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.366 [2024-07-25 23:38:43.991952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.366 [2024-07-25 23:38:43.992226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.366 [2024-07-25 23:38:43.992476] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.366 [2024-07-25 23:38:43.992501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.366 [2024-07-25 23:38:43.992517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.366 [2024-07-25 23:38:43.996088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.366 [2024-07-25 23:38:44.005349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.366 [2024-07-25 23:38:44.005766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.366 [2024-07-25 23:38:44.005797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.366 [2024-07-25 23:38:44.005815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.366 [2024-07-25 23:38:44.006055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.366 [2024-07-25 23:38:44.006310] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.366 [2024-07-25 23:38:44.006334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.366 [2024-07-25 23:38:44.006349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.366 [2024-07-25 23:38:44.009917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.366 [2024-07-25 23:38:44.019196] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.366 [2024-07-25 23:38:44.019617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.366 [2024-07-25 23:38:44.019648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.366 [2024-07-25 23:38:44.019666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.366 [2024-07-25 23:38:44.019905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.366 [2024-07-25 23:38:44.020159] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.366 [2024-07-25 23:38:44.020184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.366 [2024-07-25 23:38:44.020200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.366 [2024-07-25 23:38:44.023763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.366 [2024-07-25 23:38:44.033034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.366 [2024-07-25 23:38:44.033448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.366 [2024-07-25 23:38:44.033479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.366 [2024-07-25 23:38:44.033497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.366 [2024-07-25 23:38:44.033736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.366 [2024-07-25 23:38:44.033979] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.366 [2024-07-25 23:38:44.034003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.366 [2024-07-25 23:38:44.034019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.366 [2024-07-25 23:38:44.037592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.366 [2024-07-25 23:38:44.047067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.366 [2024-07-25 23:38:44.047474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.366 [2024-07-25 23:38:44.047505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.366 [2024-07-25 23:38:44.047524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.366 [2024-07-25 23:38:44.047763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.366 [2024-07-25 23:38:44.048007] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.366 [2024-07-25 23:38:44.048031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.366 [2024-07-25 23:38:44.048046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.366 [2024-07-25 23:38:44.051609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.366 [2024-07-25 23:38:44.061086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.366 [2024-07-25 23:38:44.061495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.366 [2024-07-25 23:38:44.061526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.366 [2024-07-25 23:38:44.061549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.366 [2024-07-25 23:38:44.061789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.366 [2024-07-25 23:38:44.062032] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.366 [2024-07-25 23:38:44.062056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.366 [2024-07-25 23:38:44.062085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.366 [2024-07-25 23:38:44.065641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.366 [2024-07-25 23:38:44.075113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.366 [2024-07-25 23:38:44.075532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.366 [2024-07-25 23:38:44.075565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.366 [2024-07-25 23:38:44.075583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.366 [2024-07-25 23:38:44.075823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.366 [2024-07-25 23:38:44.076078] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.366 [2024-07-25 23:38:44.076102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.366 [2024-07-25 23:38:44.076118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.366 [2024-07-25 23:38:44.079680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.645 [2024-07-25 23:38:44.089154] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.645 [2024-07-25 23:38:44.089562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.645 [2024-07-25 23:38:44.089594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.645 [2024-07-25 23:38:44.089611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.645 [2024-07-25 23:38:44.089850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.645 [2024-07-25 23:38:44.090103] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.645 [2024-07-25 23:38:44.090128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.645 [2024-07-25 23:38:44.090145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.645 [2024-07-25 23:38:44.093708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.645 [2024-07-25 23:38:44.103184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.645 [2024-07-25 23:38:44.103566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.645 [2024-07-25 23:38:44.103597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.645 [2024-07-25 23:38:44.103615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.645 [2024-07-25 23:38:44.103855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.645 [2024-07-25 23:38:44.104114] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.645 [2024-07-25 23:38:44.104139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.645 [2024-07-25 23:38:44.104155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.645 [2024-07-25 23:38:44.107721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.645 [2024-07-25 23:38:44.117194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.645 [2024-07-25 23:38:44.117604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.645 [2024-07-25 23:38:44.117636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.645 [2024-07-25 23:38:44.117654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.645 [2024-07-25 23:38:44.117893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.645 [2024-07-25 23:38:44.118148] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.645 [2024-07-25 23:38:44.118172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.645 [2024-07-25 23:38:44.118188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.645 [2024-07-25 23:38:44.121758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.645 [2024-07-25 23:38:44.131034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.645 [2024-07-25 23:38:44.131453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.645 [2024-07-25 23:38:44.131485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.645 [2024-07-25 23:38:44.131503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.645 [2024-07-25 23:38:44.131742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.645 [2024-07-25 23:38:44.131985] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.645 [2024-07-25 23:38:44.132009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.645 [2024-07-25 23:38:44.132025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.645 [2024-07-25 23:38:44.135587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.645 [2024-07-25 23:38:44.145064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.646 [2024-07-25 23:38:44.145449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.646 [2024-07-25 23:38:44.145479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.646 [2024-07-25 23:38:44.145497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.646 [2024-07-25 23:38:44.145736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.646 [2024-07-25 23:38:44.145979] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.646 [2024-07-25 23:38:44.146003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.646 [2024-07-25 23:38:44.146019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.646 [2024-07-25 23:38:44.149588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.646 [2024-07-25 23:38:44.159071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.646 [2024-07-25 23:38:44.159499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.646 [2024-07-25 23:38:44.159548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.646 [2024-07-25 23:38:44.159566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.646 [2024-07-25 23:38:44.159805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.646 [2024-07-25 23:38:44.160049] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.646 [2024-07-25 23:38:44.160082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.646 [2024-07-25 23:38:44.160098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.646 [2024-07-25 23:38:44.163659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.646 [2024-07-25 23:38:44.172963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.646 [2024-07-25 23:38:44.173414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.646 [2024-07-25 23:38:44.173447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:46.646 [2024-07-25 23:38:44.173465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:46.646 [2024-07-25 23:38:44.173704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:46.646 [2024-07-25 23:38:44.173948] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.646 [2024-07-25 23:38:44.173972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.646 [2024-07-25 23:38:44.173988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.646 [2024-07-25 23:38:44.177546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.186 [2024-07-25 23:38:44.854701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.186 [2024-07-25 23:38:44.855112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.186 [2024-07-25 23:38:44.855144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.186 [2024-07-25 23:38:44.855162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.186 [2024-07-25 23:38:44.855407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.186 [2024-07-25 23:38:44.855651] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.187 [2024-07-25 23:38:44.855675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.187 [2024-07-25 23:38:44.855691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.187 [2024-07-25 23:38:44.859273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.187 [2024-07-25 23:38:44.868539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.187 [2024-07-25 23:38:44.868911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.187 [2024-07-25 23:38:44.868938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.187 [2024-07-25 23:38:44.868953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.187 [2024-07-25 23:38:44.869199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.187 [2024-07-25 23:38:44.869458] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.187 [2024-07-25 23:38:44.869482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.187 [2024-07-25 23:38:44.869498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.187 [2024-07-25 23:38:44.873070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.187 [2024-07-25 23:38:44.882542] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.187 [2024-07-25 23:38:44.882952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.187 [2024-07-25 23:38:44.882983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.187 [2024-07-25 23:38:44.883001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.187 [2024-07-25 23:38:44.883281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.187 [2024-07-25 23:38:44.883531] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.187 [2024-07-25 23:38:44.883556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.187 [2024-07-25 23:38:44.883572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.187 [2024-07-25 23:38:44.887146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.187 [2024-07-25 23:38:44.896410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.187 [2024-07-25 23:38:44.896819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.187 [2024-07-25 23:38:44.896850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.187 [2024-07-25 23:38:44.896868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.187 [2024-07-25 23:38:44.897118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.187 [2024-07-25 23:38:44.897362] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.187 [2024-07-25 23:38:44.897386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.187 [2024-07-25 23:38:44.897407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.187 [2024-07-25 23:38:44.900977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.445 [2024-07-25 23:38:44.910268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.445 [2024-07-25 23:38:44.910687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.445 [2024-07-25 23:38:44.910719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.445 [2024-07-25 23:38:44.910737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.445 [2024-07-25 23:38:44.910977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.445 [2024-07-25 23:38:44.911233] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.445 [2024-07-25 23:38:44.911258] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.445 [2024-07-25 23:38:44.911274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.445 [2024-07-25 23:38:44.914847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.445 [2024-07-25 23:38:44.924135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.445 [2024-07-25 23:38:44.924564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.445 [2024-07-25 23:38:44.924613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.445 [2024-07-25 23:38:44.924631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.445 [2024-07-25 23:38:44.924871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.445 [2024-07-25 23:38:44.925125] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.445 [2024-07-25 23:38:44.925150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.445 [2024-07-25 23:38:44.925166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.445 [2024-07-25 23:38:44.928736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.445 [2024-07-25 23:38:44.938025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.445 [2024-07-25 23:38:44.938451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.445 [2024-07-25 23:38:44.938483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.445 [2024-07-25 23:38:44.938501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.445 [2024-07-25 23:38:44.938739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.445 [2024-07-25 23:38:44.938983] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.445 [2024-07-25 23:38:44.939007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.445 [2024-07-25 23:38:44.939023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.445 [2024-07-25 23:38:44.942591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.445 [2024-07-25 23:38:44.952067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.445 [2024-07-25 23:38:44.952445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.445 [2024-07-25 23:38:44.952475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.445 [2024-07-25 23:38:44.952491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.445 [2024-07-25 23:38:44.952711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.445 [2024-07-25 23:38:44.952956] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.446 [2024-07-25 23:38:44.952979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.446 [2024-07-25 23:38:44.952995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.446 [2024-07-25 23:38:44.956564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.446 [2024-07-25 23:38:44.966073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.446 [2024-07-25 23:38:44.966503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.446 [2024-07-25 23:38:44.966535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.446 [2024-07-25 23:38:44.966553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.446 [2024-07-25 23:38:44.966792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.446 [2024-07-25 23:38:44.967035] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.446 [2024-07-25 23:38:44.967068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.446 [2024-07-25 23:38:44.967086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.446 [2024-07-25 23:38:44.970638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.446 [2024-07-25 23:38:44.980125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.446 [2024-07-25 23:38:44.980507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.446 [2024-07-25 23:38:44.980538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.446 [2024-07-25 23:38:44.980556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.446 [2024-07-25 23:38:44.980795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.446 [2024-07-25 23:38:44.981039] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.446 [2024-07-25 23:38:44.981074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.446 [2024-07-25 23:38:44.981093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.446 [2024-07-25 23:38:44.984657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.446 [2024-07-25 23:38:44.994148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.446 [2024-07-25 23:38:44.994534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.446 [2024-07-25 23:38:44.994566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.446 [2024-07-25 23:38:44.994584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.446 [2024-07-25 23:38:44.994823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.446 [2024-07-25 23:38:44.995083] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.446 [2024-07-25 23:38:44.995109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.446 [2024-07-25 23:38:44.995124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.446 [2024-07-25 23:38:44.998691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.446 [2024-07-25 23:38:45.008188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.446 [2024-07-25 23:38:45.008601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.446 [2024-07-25 23:38:45.008632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.446 [2024-07-25 23:38:45.008650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.446 [2024-07-25 23:38:45.008888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.446 [2024-07-25 23:38:45.009146] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.446 [2024-07-25 23:38:45.009171] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.446 [2024-07-25 23:38:45.009186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.446 [2024-07-25 23:38:45.012762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.446 [2024-07-25 23:38:45.022070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.446 [2024-07-25 23:38:45.022462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.446 [2024-07-25 23:38:45.022494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.446 [2024-07-25 23:38:45.022513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.446 [2024-07-25 23:38:45.022752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.446 [2024-07-25 23:38:45.022995] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.446 [2024-07-25 23:38:45.023019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.446 [2024-07-25 23:38:45.023035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.446 [2024-07-25 23:38:45.026614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.446 [2024-07-25 23:38:45.035908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.446 [2024-07-25 23:38:45.036316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.446 [2024-07-25 23:38:45.036348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.446 [2024-07-25 23:38:45.036366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.446 [2024-07-25 23:38:45.036605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.446 [2024-07-25 23:38:45.036848] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.446 [2024-07-25 23:38:45.036873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.446 [2024-07-25 23:38:45.036889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.446 [2024-07-25 23:38:45.040476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.446 [2024-07-25 23:38:45.049749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.446 [2024-07-25 23:38:45.050143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.446 [2024-07-25 23:38:45.050175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.446 [2024-07-25 23:38:45.050194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.446 [2024-07-25 23:38:45.050434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.446 [2024-07-25 23:38:45.050678] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.446 [2024-07-25 23:38:45.050702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.446 [2024-07-25 23:38:45.050718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.446 [2024-07-25 23:38:45.054295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.446 [2024-07-25 23:38:45.063779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.446 [2024-07-25 23:38:45.064171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.446 [2024-07-25 23:38:45.064203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.446 [2024-07-25 23:38:45.064221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.446 [2024-07-25 23:38:45.064460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.446 [2024-07-25 23:38:45.064703] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.446 [2024-07-25 23:38:45.064728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.446 [2024-07-25 23:38:45.064744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.446 [2024-07-25 23:38:45.068324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.447 [2024-07-25 23:38:45.077810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.447 [2024-07-25 23:38:45.078209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.447 [2024-07-25 23:38:45.078240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.447 [2024-07-25 23:38:45.078258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.447 [2024-07-25 23:38:45.078497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.447 [2024-07-25 23:38:45.078740] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.447 [2024-07-25 23:38:45.078764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.447 [2024-07-25 23:38:45.078779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.447 [2024-07-25 23:38:45.082367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.447 [2024-07-25 23:38:45.091651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.447 [2024-07-25 23:38:45.092053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.447 [2024-07-25 23:38:45.092104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.447 [2024-07-25 23:38:45.092128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.447 [2024-07-25 23:38:45.092368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.447 [2024-07-25 23:38:45.092611] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.447 [2024-07-25 23:38:45.092635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.447 [2024-07-25 23:38:45.092651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.447 [2024-07-25 23:38:45.096229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.447 [2024-07-25 23:38:45.105508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.447 [2024-07-25 23:38:45.105946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.447 [2024-07-25 23:38:45.105995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.447 [2024-07-25 23:38:45.106013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.447 [2024-07-25 23:38:45.106261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.447 [2024-07-25 23:38:45.106504] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.447 [2024-07-25 23:38:45.106527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.447 [2024-07-25 23:38:45.106542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.447 [2024-07-25 23:38:45.110123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.447 [2024-07-25 23:38:45.119419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.447 [2024-07-25 23:38:45.119857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.447 [2024-07-25 23:38:45.119906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.447 [2024-07-25 23:38:45.119925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.447 [2024-07-25 23:38:45.120178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.447 [2024-07-25 23:38:45.120424] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.447 [2024-07-25 23:38:45.120450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.447 [2024-07-25 23:38:45.120467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.447 [2024-07-25 23:38:45.124040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.447 [2024-07-25 23:38:45.133370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.447 [2024-07-25 23:38:45.133833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.447 [2024-07-25 23:38:45.133865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.447 [2024-07-25 23:38:45.133884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.447 [2024-07-25 23:38:45.134135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.447 [2024-07-25 23:38:45.134379] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.447 [2024-07-25 23:38:45.134410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.447 [2024-07-25 23:38:45.134426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.447 [2024-07-25 23:38:45.138006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.447 [2024-07-25 23:38:45.147286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.447 [2024-07-25 23:38:45.147759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.447 [2024-07-25 23:38:45.147809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.447 [2024-07-25 23:38:45.147827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.447 [2024-07-25 23:38:45.148076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.447 [2024-07-25 23:38:45.148320] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.447 [2024-07-25 23:38:45.148346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.447 [2024-07-25 23:38:45.148362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.447 [2024-07-25 23:38:45.151930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.447 [2024-07-25 23:38:45.161229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.447 [2024-07-25 23:38:45.161718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.447 [2024-07-25 23:38:45.161777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.447 [2024-07-25 23:38:45.161795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.447 [2024-07-25 23:38:45.162036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.447 [2024-07-25 23:38:45.162292] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.447 [2024-07-25 23:38:45.162318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.447 [2024-07-25 23:38:45.162335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.447 [2024-07-25 23:38:45.165909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.706 [2024-07-25 23:38:45.175198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.706 [2024-07-25 23:38:45.175691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.706 [2024-07-25 23:38:45.175741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.706 [2024-07-25 23:38:45.175760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.706 [2024-07-25 23:38:45.176000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.706 [2024-07-25 23:38:45.176256] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.706 [2024-07-25 23:38:45.176284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.706 [2024-07-25 23:38:45.176300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.706 [2024-07-25 23:38:45.179872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.706 [2024-07-25 23:38:45.189163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.706 [2024-07-25 23:38:45.189603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.706 [2024-07-25 23:38:45.189636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.706 [2024-07-25 23:38:45.189656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.706 [2024-07-25 23:38:45.189896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.706 [2024-07-25 23:38:45.190156] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.706 [2024-07-25 23:38:45.190182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.706 [2024-07-25 23:38:45.190200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.706 [2024-07-25 23:38:45.193768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.706 [2024-07-25 23:38:45.203073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.706 [2024-07-25 23:38:45.203561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.706 [2024-07-25 23:38:45.203612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.706 [2024-07-25 23:38:45.203631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.706 [2024-07-25 23:38:45.203871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.706 [2024-07-25 23:38:45.204128] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.706 [2024-07-25 23:38:45.204155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.706 [2024-07-25 23:38:45.204172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.706 [2024-07-25 23:38:45.207740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.706 [2024-07-25 23:38:45.217025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.706 [2024-07-25 23:38:45.217523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.706 [2024-07-25 23:38:45.217573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.706 [2024-07-25 23:38:45.217592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.706 [2024-07-25 23:38:45.217832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.706 [2024-07-25 23:38:45.218088] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.706 [2024-07-25 23:38:45.218124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.706 [2024-07-25 23:38:45.218140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.706 [2024-07-25 23:38:45.221718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.706 [2024-07-25 23:38:45.230993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.706 [2024-07-25 23:38:45.231451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.706 [2024-07-25 23:38:45.231501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.706 [2024-07-25 23:38:45.231520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.706 [2024-07-25 23:38:45.231768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.706 [2024-07-25 23:38:45.232012] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.706 [2024-07-25 23:38:45.232037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.706 [2024-07-25 23:38:45.232052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.706 [2024-07-25 23:38:45.235654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.706 [2024-07-25 23:38:45.244949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.706 [2024-07-25 23:38:45.245351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.706 [2024-07-25 23:38:45.245384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.706 [2024-07-25 23:38:45.245403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.706 [2024-07-25 23:38:45.245642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.707 [2024-07-25 23:38:45.245887] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.707 [2024-07-25 23:38:45.245912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.707 [2024-07-25 23:38:45.245928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.707 [2024-07-25 23:38:45.249510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.707 [2024-07-25 23:38:45.258792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.707 [2024-07-25 23:38:45.259323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.707 [2024-07-25 23:38:45.259357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.707 [2024-07-25 23:38:45.259375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.707 [2024-07-25 23:38:45.259616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.707 [2024-07-25 23:38:45.259861] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.707 [2024-07-25 23:38:45.259886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.707 [2024-07-25 23:38:45.259902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.707 [2024-07-25 23:38:45.263511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.707 [2024-07-25 23:38:45.272782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.707 [2024-07-25 23:38:45.273181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.707 [2024-07-25 23:38:45.273215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.707 [2024-07-25 23:38:45.273234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.707 [2024-07-25 23:38:45.273473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.707 [2024-07-25 23:38:45.273717] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.707 [2024-07-25 23:38:45.273743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.707 [2024-07-25 23:38:45.273765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.707 [2024-07-25 23:38:45.277344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.707 [2024-07-25 23:38:45.286824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.707 [2024-07-25 23:38:45.287206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.707 [2024-07-25 23:38:45.287239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.707 [2024-07-25 23:38:45.287258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.707 [2024-07-25 23:38:45.287497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.707 [2024-07-25 23:38:45.287741] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.707 [2024-07-25 23:38:45.287766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.707 [2024-07-25 23:38:45.287783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.707 [2024-07-25 23:38:45.291364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.707 [2024-07-25 23:38:45.300841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.707 [2024-07-25 23:38:45.301251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.707 [2024-07-25 23:38:45.301284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.707 [2024-07-25 23:38:45.301303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.707 [2024-07-25 23:38:45.301542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.707 [2024-07-25 23:38:45.301786] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.707 [2024-07-25 23:38:45.301812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.707 [2024-07-25 23:38:45.301828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.707 [2024-07-25 23:38:45.305413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.707 [2024-07-25 23:38:45.314684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.707 [2024-07-25 23:38:45.315075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.707 [2024-07-25 23:38:45.315107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.707 [2024-07-25 23:38:45.315137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.707 [2024-07-25 23:38:45.315379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.707 [2024-07-25 23:38:45.315623] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.707 [2024-07-25 23:38:45.315648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.707 [2024-07-25 23:38:45.315665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.707 [2024-07-25 23:38:45.319244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.707 [2024-07-25 23:38:45.328719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.707 [2024-07-25 23:38:45.329137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.707 [2024-07-25 23:38:45.329170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.707 [2024-07-25 23:38:45.329189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.707 [2024-07-25 23:38:45.329429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.707 [2024-07-25 23:38:45.329672] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.707 [2024-07-25 23:38:45.329698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.707 [2024-07-25 23:38:45.329715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.707 [2024-07-25 23:38:45.333307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.707 [2024-07-25 23:38:45.342572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.707 [2024-07-25 23:38:45.342960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.707 [2024-07-25 23:38:45.342993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.707 [2024-07-25 23:38:45.343012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.707 [2024-07-25 23:38:45.343264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.707 [2024-07-25 23:38:45.343510] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.707 [2024-07-25 23:38:45.343536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.707 [2024-07-25 23:38:45.343553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.707 [2024-07-25 23:38:45.347124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.707 [2024-07-25 23:38:45.356591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.707 [2024-07-25 23:38:45.356976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.707 [2024-07-25 23:38:45.357009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.707 [2024-07-25 23:38:45.357028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.707 [2024-07-25 23:38:45.357280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.707 [2024-07-25 23:38:45.357526] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.707 [2024-07-25 23:38:45.357552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.707 [2024-07-25 23:38:45.357569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.708 [2024-07-25 23:38:45.361143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.708 [2024-07-25 23:38:45.370615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.708 [2024-07-25 23:38:45.370981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.708 [2024-07-25 23:38:45.371015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.708 [2024-07-25 23:38:45.371034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.708 [2024-07-25 23:38:45.371297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.708 [2024-07-25 23:38:45.371545] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.708 [2024-07-25 23:38:45.371572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.708 [2024-07-25 23:38:45.371589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.708 [2024-07-25 23:38:45.375161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.708 [2024-07-25 23:38:45.384630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.708 [2024-07-25 23:38:45.385052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.708 [2024-07-25 23:38:45.385091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.708 [2024-07-25 23:38:45.385111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.708 [2024-07-25 23:38:45.385352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.708 [2024-07-25 23:38:45.385598] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.708 [2024-07-25 23:38:45.385624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.708 [2024-07-25 23:38:45.385640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.708 [2024-07-25 23:38:45.389215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.708 [2024-07-25 23:38:45.398476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.708 [2024-07-25 23:38:45.398896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.708 [2024-07-25 23:38:45.398929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:47.708 [2024-07-25 23:38:45.398947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:47.708 [2024-07-25 23:38:45.399198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:47.708 [2024-07-25 23:38:45.399443] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.708 [2024-07-25 23:38:45.399469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.708 [2024-07-25 23:38:45.399486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.708 [2024-07-25 23:38:45.403052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.489 [2024-07-25 23:38:46.107654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.489 [2024-07-25 23:38:46.108084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.489 [2024-07-25 23:38:46.108117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.489 [2024-07-25 23:38:46.108136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.489 [2024-07-25 23:38:46.108376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.489 [2024-07-25 23:38:46.108622] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.489 [2024-07-25 23:38:46.108647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.489 [2024-07-25 23:38:46.108664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.489 [2024-07-25 23:38:46.112240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.489 [2024-07-25 23:38:46.121504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.489 [2024-07-25 23:38:46.121897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.489 [2024-07-25 23:38:46.121929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.489 [2024-07-25 23:38:46.121954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.489 [2024-07-25 23:38:46.122204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.489 [2024-07-25 23:38:46.122449] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.489 [2024-07-25 23:38:46.122474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.489 [2024-07-25 23:38:46.122491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.489 [2024-07-25 23:38:46.126054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
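The recurring "recv state of tqpair ... is same with the state(5) to be set" error comes from a transition guard in nvme_tcp_qpair_set_recv_state: asking the qpair to enter the receive state it is already in is reported rather than silently repeated. A hedged illustration of that guard pattern; the enum names and the assignment of 5 to the error state are assumptions for the sketch, not the actual SPDK definitions:

```c
/* Illustrative transition guard: re-setting the current state is logged
 * instead of applied. Enum names/values are assumptions, not SPDK's. */
#include <stdio.h>

enum recv_state {
	RECV_STATE_READY = 0,
	RECV_STATE_CH,
	RECV_STATE_PSH,
	RECV_STATE_PAYLOAD,
	RECV_STATE_QUIESCING,
	RECV_STATE_ERROR,	/* == 5, matching "state(5)" in the log */
};

static enum recv_state g_recv_state = RECV_STATE_READY;

static void set_recv_state(enum recv_state state)
{
	if (g_recv_state == state) {
		fprintf(stderr, "The recv state is same with the state(%d) to be set\n", state);
		return;
	}
	g_recv_state = state;
}

int main(void)
{
	set_recv_state(RECV_STATE_ERROR);	/* failed connect puts the qpair in error */
	set_recv_state(RECV_STATE_ERROR);	/* guard fires on the repeat */
	return 0;
}
```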
00:32:48.490 [2024-07-25 23:38:46.135531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.490 [2024-07-25 23:38:46.135949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.490 [2024-07-25 23:38:46.135982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.490 [2024-07-25 23:38:46.136000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.490 [2024-07-25 23:38:46.136250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.490 [2024-07-25 23:38:46.136506] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.490 [2024-07-25 23:38:46.136532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.490 [2024-07-25 23:38:46.136550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.490 [2024-07-25 23:38:46.140128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.490 [2024-07-25 23:38:46.149387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.490 [2024-07-25 23:38:46.149800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.490 [2024-07-25 23:38:46.149833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.490 [2024-07-25 23:38:46.149853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.490 [2024-07-25 23:38:46.150104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.490 [2024-07-25 23:38:46.150349] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.490 [2024-07-25 23:38:46.150375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.490 [2024-07-25 23:38:46.150392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.490 [2024-07-25 23:38:46.153959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.490 [2024-07-25 23:38:46.163235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.490 [2024-07-25 23:38:46.163606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.490 [2024-07-25 23:38:46.163638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.490 [2024-07-25 23:38:46.163657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.490 [2024-07-25 23:38:46.163897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.490 [2024-07-25 23:38:46.164151] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.490 [2024-07-25 23:38:46.164182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.490 [2024-07-25 23:38:46.164198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.490 [2024-07-25 23:38:46.167785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.490 [2024-07-25 23:38:46.177083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.490 [2024-07-25 23:38:46.177503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.490 [2024-07-25 23:38:46.177536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.490 [2024-07-25 23:38:46.177554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.490 [2024-07-25 23:38:46.177793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.490 [2024-07-25 23:38:46.178037] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.490 [2024-07-25 23:38:46.178071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.490 [2024-07-25 23:38:46.178090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.490 [2024-07-25 23:38:46.181657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.490 [2024-07-25 23:38:46.190933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.490 [2024-07-25 23:38:46.191364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.490 [2024-07-25 23:38:46.191396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.490 [2024-07-25 23:38:46.191415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.490 [2024-07-25 23:38:46.191655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.490 [2024-07-25 23:38:46.191899] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.490 [2024-07-25 23:38:46.191924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.490 [2024-07-25 23:38:46.191940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.490 [2024-07-25 23:38:46.195513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.490 [2024-07-25 23:38:46.204776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.490 [2024-07-25 23:38:46.205182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.490 [2024-07-25 23:38:46.205215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.490 [2024-07-25 23:38:46.205235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.490 [2024-07-25 23:38:46.205475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.490 [2024-07-25 23:38:46.205720] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.490 [2024-07-25 23:38:46.205745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.490 [2024-07-25 23:38:46.205761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.490 [2024-07-25 23:38:46.209332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.748 [2024-07-25 23:38:46.218803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.748 [2024-07-25 23:38:46.219236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.748 [2024-07-25 23:38:46.219270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.748 [2024-07-25 23:38:46.219289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.748 [2024-07-25 23:38:46.219530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.748 [2024-07-25 23:38:46.219776] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.748 [2024-07-25 23:38:46.219801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.748 [2024-07-25 23:38:46.219818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.748 [2024-07-25 23:38:46.223393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.748 [2024-07-25 23:38:46.232673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.748 [2024-07-25 23:38:46.233108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.748 [2024-07-25 23:38:46.233143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.748 [2024-07-25 23:38:46.233162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.748 [2024-07-25 23:38:46.233403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.748 [2024-07-25 23:38:46.233648] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.748 [2024-07-25 23:38:46.233674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.748 [2024-07-25 23:38:46.233691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.748 [2024-07-25 23:38:46.237293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.748 [2024-07-25 23:38:46.246584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.748 [2024-07-25 23:38:46.247003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.748 [2024-07-25 23:38:46.247036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.748 [2024-07-25 23:38:46.247055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.748 [2024-07-25 23:38:46.247305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.748 [2024-07-25 23:38:46.247550] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.748 [2024-07-25 23:38:46.247577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.748 [2024-07-25 23:38:46.247594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.748 [2024-07-25 23:38:46.251166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.748 [2024-07-25 23:38:46.260453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.748 [2024-07-25 23:38:46.260876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.748 [2024-07-25 23:38:46.260908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.748 [2024-07-25 23:38:46.260926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.748 [2024-07-25 23:38:46.261182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.748 [2024-07-25 23:38:46.261425] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.748 [2024-07-25 23:38:46.261450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.748 [2024-07-25 23:38:46.261467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.748 [2024-07-25 23:38:46.265032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.748 [2024-07-25 23:38:46.274314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.748 [2024-07-25 23:38:46.274682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.748 [2024-07-25 23:38:46.274714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.748 [2024-07-25 23:38:46.274733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.748 [2024-07-25 23:38:46.274973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.748 [2024-07-25 23:38:46.275225] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.748 [2024-07-25 23:38:46.275251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.748 [2024-07-25 23:38:46.275267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.748 [2024-07-25 23:38:46.278837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.748 [2024-07-25 23:38:46.288320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.748 [2024-07-25 23:38:46.288730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.748 [2024-07-25 23:38:46.288763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.749 [2024-07-25 23:38:46.288782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.749 [2024-07-25 23:38:46.289022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.749 [2024-07-25 23:38:46.289274] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.749 [2024-07-25 23:38:46.289300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.749 [2024-07-25 23:38:46.289324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.749 [2024-07-25 23:38:46.292897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.749 [2024-07-25 23:38:46.302170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.749 [2024-07-25 23:38:46.302556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.749 [2024-07-25 23:38:46.302588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.749 [2024-07-25 23:38:46.302607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.749 [2024-07-25 23:38:46.302846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.749 [2024-07-25 23:38:46.303100] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.749 [2024-07-25 23:38:46.303125] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.749 [2024-07-25 23:38:46.303147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.749 [2024-07-25 23:38:46.306711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.749 [2024-07-25 23:38:46.316203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.749 [2024-07-25 23:38:46.316587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.749 [2024-07-25 23:38:46.316620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.749 [2024-07-25 23:38:46.316639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.749 [2024-07-25 23:38:46.316879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.749 [2024-07-25 23:38:46.317133] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.749 [2024-07-25 23:38:46.317159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.749 [2024-07-25 23:38:46.317175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.749 [2024-07-25 23:38:46.320741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.749 [2024-07-25 23:38:46.330217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.749 [2024-07-25 23:38:46.330634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.749 [2024-07-25 23:38:46.330666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.749 [2024-07-25 23:38:46.330684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.749 [2024-07-25 23:38:46.330924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.749 [2024-07-25 23:38:46.331179] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.749 [2024-07-25 23:38:46.331205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.749 [2024-07-25 23:38:46.331222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.749 [2024-07-25 23:38:46.334786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.749 [2024-07-25 23:38:46.344081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.749 [2024-07-25 23:38:46.344526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.749 [2024-07-25 23:38:46.344560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.749 [2024-07-25 23:38:46.344580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.749 [2024-07-25 23:38:46.344821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.749 [2024-07-25 23:38:46.345076] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.749 [2024-07-25 23:38:46.345102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.749 [2024-07-25 23:38:46.345118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.749 [2024-07-25 23:38:46.348685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.749 [2024-07-25 23:38:46.357944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.749 [2024-07-25 23:38:46.358385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.749 [2024-07-25 23:38:46.358432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.749 [2024-07-25 23:38:46.358453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.749 [2024-07-25 23:38:46.358695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.749 [2024-07-25 23:38:46.358941] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.749 [2024-07-25 23:38:46.358967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.749 [2024-07-25 23:38:46.358984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.749 [2024-07-25 23:38:46.362560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.749 [2024-07-25 23:38:46.371819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.749 [2024-07-25 23:38:46.372228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.749 [2024-07-25 23:38:46.372260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.749 [2024-07-25 23:38:46.372279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.749 [2024-07-25 23:38:46.372519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.749 [2024-07-25 23:38:46.372763] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.749 [2024-07-25 23:38:46.372788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.749 [2024-07-25 23:38:46.372805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.749 [2024-07-25 23:38:46.376382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
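Every cycle in this stretch is identical except for the timestamps, and a new attempt starts roughly every 14 ms, consistent with the target process having been killed out from under the host (the shell's "Killed" notice for PID 1532501 surfaces just below). The retry shape, sketched with a hypothetical try_reset() standing in for the whole bdev_nvme reconnect path:

```c
/* Hedged sketch of the retry loop shape seen above; try_reset() is a
 * hypothetical stand-in for the bdev_nvme reconnect path, not an SPDK API. */
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static bool try_reset(void)
{
	/* In the log: connect() -> errno 111, flush -> errno 9 (EBADF),
	 * controller re-init fails, reset marked failed. */
	return false;	/* target is down, so every attempt fails */
}

int main(void)
{
	for (int attempt = 1; attempt <= 5; attempt++) {
		printf("resetting controller (attempt %d)\n", attempt);
		if (try_reset()) {
			printf("reconnected\n");
			return 0;
		}
		printf("Resetting controller failed.\n");
		usleep(14 * 1000);	/* ~14 ms between cycles in the log */
	}
	return 1;
}
```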
00:32:48.749 [2024-07-25 23:38:46.385850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.749 [2024-07-25 23:38:46.386254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.749 [2024-07-25 23:38:46.386288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.749 [2024-07-25 23:38:46.386307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.749 [2024-07-25 23:38:46.386547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.749 [2024-07-25 23:38:46.386791] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.749 [2024-07-25 23:38:46.386816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.749 [2024-07-25 23:38:46.386834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1532501 Killed "${NVMF_APP[@]}" "$@" 00:32:48.749 [2024-07-25 23:38:46.390408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.749 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:32:48.749 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:48.749 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:48.749 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:48.749 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:48.749 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1533455 00:32:48.750 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:48.750 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1533455 00:32:48.750 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1533455 ']' 00:32:48.750 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:48.750 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:48.750 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:48.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
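Here bdevperf.sh reaps the old nvmf_tgt (PID 1532501) and tgt_init starts a replacement (PID 1533455); waitforlisten then blocks until the new target answers on its RPC socket at /var/tmp/spdk.sock. The same wait-until-listening idea, sketched against that Unix-domain socket path (the timeout is an assumption, and the real helper also checks that the PID is still alive while it polls):

```c
/* Sketch of a wait-for-listen loop: poll a Unix-domain socket until a
 * server accepts, or give up. Timeout is an assumption; the real
 * waitforlisten also verifies the target PID is still running. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_un addr = {0};

	addr.sun_family = AF_UNIX;
	strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);

	for (int i = 0; i < 100; i++) {	/* ~10 s at 100 ms per try */
		int fd = socket(AF_UNIX, SOCK_STREAM, 0);

		if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
			printf("target is listening\n");
			close(fd);
			return 0;
		}
		close(fd);
		usleep(100 * 1000);
	}
	fprintf(stderr, "timed out waiting for listener\n");
	return 1;
}
```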
00:32:48.750 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:48.750 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:48.750 [2024-07-25 23:38:46.399896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.750 [2024-07-25 23:38:46.400296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.750 [2024-07-25 23:38:46.400331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.750 [2024-07-25 23:38:46.400351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.750 [2024-07-25 23:38:46.400592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.750 [2024-07-25 23:38:46.400837] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.750 [2024-07-25 23:38:46.400862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.750 [2024-07-25 23:38:46.400880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.750 [2024-07-25 23:38:46.404454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.750 [2024-07-25 23:38:46.413929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.750 [2024-07-25 23:38:46.414349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.750 [2024-07-25 23:38:46.414381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.750 [2024-07-25 23:38:46.414400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.750 [2024-07-25 23:38:46.414641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.750 [2024-07-25 23:38:46.414885] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.750 [2024-07-25 23:38:46.414910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.750 [2024-07-25 23:38:46.414927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.750 [2024-07-25 23:38:46.418502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.750 [2024-07-25 23:38:46.427772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.750 [2024-07-25 23:38:46.428162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.750 [2024-07-25 23:38:46.428194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.750 [2024-07-25 23:38:46.428213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.750 [2024-07-25 23:38:46.428459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.750 [2024-07-25 23:38:46.428704] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.750 [2024-07-25 23:38:46.428729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.750 [2024-07-25 23:38:46.428747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.750 [2024-07-25 23:38:46.432322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.750 [2024-07-25 23:38:46.441621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.750 [2024-07-25 23:38:46.442042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.750 [2024-07-25 23:38:46.442081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.750 [2024-07-25 23:38:46.442100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.750 [2024-07-25 23:38:46.442340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.750 [2024-07-25 23:38:46.442585] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.750 [2024-07-25 23:38:46.442609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.750 [2024-07-25 23:38:46.442625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.750 [2024-07-25 23:38:46.445647] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:32:48.750 [2024-07-25 23:38:46.445732] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:48.750 [2024-07-25 23:38:46.446196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.750 [2024-07-25 23:38:46.455860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.750 [2024-07-25 23:38:46.456310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.750 [2024-07-25 23:38:46.456342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.750 [2024-07-25 23:38:46.456361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.750 [2024-07-25 23:38:46.456601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.750 [2024-07-25 23:38:46.456846] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.750 [2024-07-25 23:38:46.456870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.750 [2024-07-25 23:38:46.456886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.750 [2024-07-25 23:38:46.460459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.750 [2024-07-25 23:38:46.469723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.750 [2024-07-25 23:38:46.470120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.750 [2024-07-25 23:38:46.470158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:48.750 [2024-07-25 23:38:46.470177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:48.750 [2024-07-25 23:38:46.470420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:48.750 [2024-07-25 23:38:46.470669] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.750 [2024-07-25 23:38:46.470694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.750 [2024-07-25 23:38:46.470711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.010 [2024-07-25 23:38:46.474290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.010 EAL: No free 2048 kB hugepages reported on node 1 00:32:49.010 [2024-07-25 23:38:46.483552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.010 [2024-07-25 23:38:46.483980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.010 [2024-07-25 23:38:46.484012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:49.010 [2024-07-25 23:38:46.484036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:49.010 [2024-07-25 23:38:46.484287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:49.010 [2024-07-25 23:38:46.484531] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.010 [2024-07-25 23:38:46.484557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.010 [2024-07-25 23:38:46.484573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.010 [2024-07-25 23:38:46.484901] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:49.010 [2024-07-25 23:38:46.488146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.010 [2024-07-25 23:38:46.497407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.010 [2024-07-25 23:38:46.497872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.010 [2024-07-25 23:38:46.497902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:49.010 [2024-07-25 23:38:46.497919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:49.010 [2024-07-25 23:38:46.498160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:49.010 [2024-07-25 23:38:46.498397] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.010 [2024-07-25 23:38:46.498417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.010 [2024-07-25 23:38:46.498432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.010 [2024-07-25 23:38:46.501510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
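The "EAL: No free 2048 kB hugepages reported on node 1" notice at the top of this stretch means DPDK found no free 2 MB hugepages on NUMA node 1 while the new target initialized; the system-wide pool is visible in /proc/meminfo and the per-node counts live under /sys/devices/system/node/. A small diagnostic sketch that prints the counters EAL consults (nothing SPDK-specific):

```c
/* Diagnostic sketch: print the kernel's system-wide hugepage counters
 * (HugePages_Total/Free/..., Hugepagesize) from /proc/meminfo. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	FILE *fp = fopen("/proc/meminfo", "r");

	if (fp == NULL) {
		perror("fopen");
		return 1;
	}
	while (fgets(line, sizeof(line), fp) != NULL) {
		if (strncmp(line, "HugePages_", 10) == 0 ||
		    strncmp(line, "Hugepagesize", 12) == 0) {
			fputs(line, stdout);	/* e.g. "HugePages_Free:     512" */
		}
	}
	fclose(fp);
	return 0;
}
```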
00:32:49.010 [2024-07-25 23:38:46.510752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.010 [2024-07-25 23:38:46.511176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.010 [2024-07-25 23:38:46.511206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:49.010 [2024-07-25 23:38:46.511233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:49.010 [2024-07-25 23:38:46.511475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:49.010 [2024-07-25 23:38:46.511676] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.010 [2024-07-25 23:38:46.511700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.010 [2024-07-25 23:38:46.511715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.010 [2024-07-25 23:38:46.514230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:49.010 [2024-07-25 23:38:46.514801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.010 [2024-07-25 23:38:46.524193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.010 [2024-07-25 23:38:46.524772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.010 [2024-07-25 23:38:46.524824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:49.010 [2024-07-25 23:38:46.524844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:49.010 [2024-07-25 23:38:46.525130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:49.010 [2024-07-25 23:38:46.525343] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.010 [2024-07-25 23:38:46.525365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.010 [2024-07-25 23:38:46.525382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.010 [2024-07-25 23:38:46.528484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.010 [2024-07-25 23:38:46.537755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.010 [2024-07-25 23:38:46.538261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.010 [2024-07-25 23:38:46.538305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:49.010 [2024-07-25 23:38:46.538324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:49.010 [2024-07-25 23:38:46.538582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:49.010 [2024-07-25 23:38:46.538785] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.010 [2024-07-25 23:38:46.538807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.010 [2024-07-25 23:38:46.538822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.010 [2024-07-25 23:38:46.541942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.010 [2024-07-25 23:38:46.551126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.010 [2024-07-25 23:38:46.551524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.010 [2024-07-25 23:38:46.551554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:49.010 [2024-07-25 23:38:46.551571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:49.010 [2024-07-25 23:38:46.551823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:49.010 [2024-07-25 23:38:46.552026] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.010 [2024-07-25 23:38:46.552047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.010 [2024-07-25 23:38:46.552086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.010 [2024-07-25 23:38:46.555155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.010 [2024-07-25 23:38:46.564575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.010 [2024-07-25 23:38:46.565129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.010 [2024-07-25 23:38:46.565179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:49.010 [2024-07-25 23:38:46.565200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:49.010 [2024-07-25 23:38:46.565453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:49.010 [2024-07-25 23:38:46.565659] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.010 [2024-07-25 23:38:46.565680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.010 [2024-07-25 23:38:46.565697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.010 [2024-07-25 23:38:46.568783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.010 [2024-07-25 23:38:46.577850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.010 [2024-07-25 23:38:46.578382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.010 [2024-07-25 23:38:46.578430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:49.010 [2024-07-25 23:38:46.578451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:49.010 [2024-07-25 23:38:46.578697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:49.010 [2024-07-25 23:38:46.578902] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.010 [2024-07-25 23:38:46.578922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.010 [2024-07-25 23:38:46.578939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.010 [2024-07-25 23:38:46.582018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.010 [2024-07-25 23:38:46.591221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.010 [2024-07-25 23:38:46.591624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.010 [2024-07-25 23:38:46.591655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:49.011 [2024-07-25 23:38:46.591672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:49.011 [2024-07-25 23:38:46.591927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:49.011 [2024-07-25 23:38:46.592171] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.011 [2024-07-25 23:38:46.592194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.011 [2024-07-25 23:38:46.592209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.011 [2024-07-25 23:38:46.595285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.011 [2024-07-25 23:38:46.603464] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:49.011 [2024-07-25 23:38:46.603500] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:49.011 [2024-07-25 23:38:46.603521] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:49.011 [2024-07-25 23:38:46.603532] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:49.011 [2024-07-25 23:38:46.603551] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:49.011 [2024-07-25 23:38:46.603678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:49.011 [2024-07-25 23:38:46.603744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:49.011 [2024-07-25 23:38:46.603746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:49.011 [2024-07-25 23:38:46.604780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.011 [2024-07-25 23:38:46.605164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.011 [2024-07-25 23:38:46.605204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:49.011 [2024-07-25 23:38:46.605222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:49.011 [2024-07-25 23:38:46.605462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:49.011 [2024-07-25 23:38:46.605678] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.011 [2024-07-25 23:38:46.605701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.011 [2024-07-25 23:38:46.605717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:32:49.011 [2024-07-25 23:38:46.608874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.011 [2024-07-25 23:38:46.618386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.011 [2024-07-25 23:38:46.618939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.011 [2024-07-25 23:38:46.618991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:49.011 [2024-07-25 23:38:46.619012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:49.011 [2024-07-25 23:38:46.619247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:49.011 [2024-07-25 23:38:46.619485] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.011 [2024-07-25 23:38:46.619508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.011 [2024-07-25 23:38:46.619526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.011 [2024-07-25 23:38:46.622782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.011 [2024-07-25 23:38:46.632026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.011 [2024-07-25 23:38:46.632619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.011 [2024-07-25 23:38:46.632665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:49.011 [2024-07-25 23:38:46.632687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:49.011 [2024-07-25 23:38:46.632952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:49.011 [2024-07-25 23:38:46.633202] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.011 [2024-07-25 23:38:46.633225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.011 [2024-07-25 23:38:46.633243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.011 [2024-07-25 23:38:46.636428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
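The app_setup_trace notices above spell out how to capture the target's tracepoints while this loop runs. A hedged sketch of both options (the spdk_trace binary path is assumed to follow the usual build layout):

    # live snapshot of the nvmf app's trace ring (shm instance id 0, from '-i 0')
    build/bin/spdk_trace -s nvmf -i 0
    # or, as the notice says, keep the shm file for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0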
00:32:49.011 [2024-07-25 23:38:46.645656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.011 [2024-07-25 23:38:46.646223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.011 [2024-07-25 23:38:46.646270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:49.011 [2024-07-25 23:38:46.646293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:49.011 [2024-07-25 23:38:46.646558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:49.011 [2024-07-25 23:38:46.646771] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.011 [2024-07-25 23:38:46.646794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.011 [2024-07-25 23:38:46.646812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.011 [2024-07-25 23:38:46.649949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.011 [2024-07-25 23:38:46.659309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.011 [2024-07-25 23:38:46.659824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.011 [2024-07-25 23:38:46.659873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:49.011 [2024-07-25 23:38:46.659893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:49.011 [2024-07-25 23:38:46.660148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:49.011 [2024-07-25 23:38:46.660379] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.011 [2024-07-25 23:38:46.660402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.011 [2024-07-25 23:38:46.660418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.011 [2024-07-25 23:38:46.663634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.011 [2024-07-25 23:38:46.672834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.011 [2024-07-25 23:38:46.673437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.011 [2024-07-25 23:38:46.673485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:49.011 [2024-07-25 23:38:46.673509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:49.011 [2024-07-25 23:38:46.673765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:49.011 [2024-07-25 23:38:46.673981] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.011 [2024-07-25 23:38:46.674004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.011 [2024-07-25 23:38:46.674022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.011 [2024-07-25 23:38:46.677238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.011 [2024-07-25 23:38:46.686429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.011 [2024-07-25 23:38:46.686972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.011 [2024-07-25 23:38:46.687014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:49.011 [2024-07-25 23:38:46.687036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:49.011 [2024-07-25 23:38:46.687304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:49.011 [2024-07-25 23:38:46.687532] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.011 [2024-07-25 23:38:46.687556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.012 [2024-07-25 23:38:46.687573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.012 [2024-07-25 23:38:46.690710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.012 [2024-07-25 23:38:46.700028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.012 [2024-07-25 23:38:46.700415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.012 [2024-07-25 23:38:46.700446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:49.012 [2024-07-25 23:38:46.700463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:49.012 [2024-07-25 23:38:46.700697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:49.012 [2024-07-25 23:38:46.700920] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.012 [2024-07-25 23:38:46.700943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.012 [2024-07-25 23:38:46.700957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.012 [2024-07-25 23:38:46.704228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.012 [2024-07-25 23:38:46.713721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.012 [2024-07-25 23:38:46.714111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.012 [2024-07-25 23:38:46.714141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:49.012 [2024-07-25 23:38:46.714158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:49.012 [2024-07-25 23:38:46.714389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:49.012 [2024-07-25 23:38:46.714603] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.012 [2024-07-25 23:38:46.714626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.012 [2024-07-25 23:38:46.714640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.012 [2024-07-25 23:38:46.717882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.012 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:49.012 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:32:49.012 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:49.012 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:49.012 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:49.012 [2024-07-25 23:38:46.727284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.012 [2024-07-25 23:38:46.727750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.012 [2024-07-25 23:38:46.727780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:49.012 [2024-07-25 23:38:46.727803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:49.012 [2024-07-25 23:38:46.728048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:49.012 [2024-07-25 23:38:46.728283] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.012 [2024-07-25 23:38:46.728306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.012 [2024-07-25 23:38:46.728322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.012 [2024-07-25 23:38:46.731617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.271 [2024-07-25 23:38:46.740828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.271 [2024-07-25 23:38:46.741238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.271 [2024-07-25 23:38:46.741267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:49.271 [2024-07-25 23:38:46.741284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:49.271 [2024-07-25 23:38:46.741516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:49.271 [2024-07-25 23:38:46.741738] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.271 [2024-07-25 23:38:46.741760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.271 [2024-07-25 23:38:46.741775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.271 [2024-07-25 23:38:46.744938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
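The (( i == 0 )) / return 0 pair at the top of this stretch reads like waitforlisten's success path: the retry counter never hit zero, so the target's RPC socket answered and timing_exit closes start_nvmf_tgt. A roughly equivalent hand-rolled poll (rpc.py path assumed, not the harness's exact loop):

    # probe the app's RPC socket until it responds, then continue with setup
    until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done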
00:32:49.271 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:49.271 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:49.271 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.271 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:49.271 [2024-07-25 23:38:46.754277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.271 [2024-07-25 23:38:46.754669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.271 [2024-07-25 23:38:46.754699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:49.271 [2024-07-25 23:38:46.754716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:49.271 [2024-07-25 23:38:46.754708] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:49.271 [2024-07-25 23:38:46.754962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:49.271 [2024-07-25 23:38:46.755205] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.271 [2024-07-25 23:38:46.755228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.271 [2024-07-25 23:38:46.755242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.271 [2024-07-25 23:38:46.758414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.271 [2024-07-25 23:38:46.767647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.271 [2024-07-25 23:38:46.768070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.271 [2024-07-25 23:38:46.768101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:49.271 [2024-07-25 23:38:46.768125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:49.271 [2024-07-25 23:38:46.768359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:49.271 [2024-07-25 23:38:46.768577] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.271 [2024-07-25 23:38:46.768599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.271 [2024-07-25 23:38:46.768613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.271 [2024-07-25 23:38:46.771729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.271 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.271 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:49.271 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.271 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:49.271 [2024-07-25 23:38:46.781293] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.271 [2024-07-25 23:38:46.781772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.271 [2024-07-25 23:38:46.781803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:49.271 [2024-07-25 23:38:46.781821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:49.271 [2024-07-25 23:38:46.782087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:49.271 [2024-07-25 23:38:46.782311] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.271 [2024-07-25 23:38:46.782334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.271 [2024-07-25 23:38:46.782351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.271 [2024-07-25 23:38:46.785573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.271 [2024-07-25 23:38:46.794825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.271 [2024-07-25 23:38:46.795396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.271 [2024-07-25 23:38:46.795441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:49.271 [2024-07-25 23:38:46.795462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:49.271 [2024-07-25 23:38:46.795698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:49.271 [2024-07-25 23:38:46.795913] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.271 [2024-07-25 23:38:46.795935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.272 [2024-07-25 23:38:46.795954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.272 [2024-07-25 23:38:46.799157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:49.272 Malloc0 00:32:49.272 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.272 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:49.272 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.272 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:49.272 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.272 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:49.272 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.272 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:49.272 [2024-07-25 23:38:46.808388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.272 [2024-07-25 23:38:46.808774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.272 [2024-07-25 23:38:46.808804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7db50 with addr=10.0.0.2, port=4420 00:32:49.272 [2024-07-25 23:38:46.808821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7db50 is same with the state(5) to be set 00:32:49.272 [2024-07-25 23:38:46.809037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7db50 (9): Bad file descriptor 00:32:49.272 [2024-07-25 23:38:46.809264] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:49.272 [2024-07-25 23:38:46.809289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:49.272 [2024-07-25 23:38:46.809305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.272 [2024-07-25 23:38:46.812519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:49.272 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.272 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:49.272 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.272 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:49.272 [2024-07-25 23:38:46.819436] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:49.272 [2024-07-25 23:38:46.821959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.272 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.272 23:38:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1532785 00:32:49.272 [2024-07-25 23:38:46.897572] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
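Read together, the interleaved traces explain why the reset storm stops exactly here: the moment nvmf_subsystem_add_listener brings up 10.0.0.2:4420, the very next reset attempt succeeds. The target-side bring-up, reduced to a hedged recap of the RPC calls visible in the trace (rpc_cmd normally forwards to scripts/rpc.py; that invocation is assumed):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420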
00:32:59.243
00:32:59.243 Latency(us)
00:32:59.243 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:59.243 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:59.243 Verification LBA range: start 0x0 length 0x4000
00:32:59.243 Nvme1n1 : 15.01 6655.79 26.00 8666.65 0.00 8328.72 782.79 18252.99
00:32:59.243 ===================================================================================================================
00:32:59.243 Total : 6655.79 26.00 8666.65 0.00 8328.72 782.79 18252.99
00:32:59.243 23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1533455 ']'
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1533455
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1533455 ']'
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1533455
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1533455
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1533455'
killing process with pid 1533455
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1533455
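The MiB/s column in the Latency(us) table above is consistent with the IOPS column for the 4096-byte I/Os this job ran, which is a quick way to sanity-check the run:

    # IOPS * 4096 B / 2^20 = MiB/s
    echo 'scale=4; 6655.79 * 4096 / 1048576' | bc    # 25.9991, i.e. the 26.00 shown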
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1533455
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
23:38:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:33:01.149
00:33:01.149 real 0m22.512s
00:33:01.149 user 1m0.373s
00:33:01.149 sys 0m4.159s
00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:01.149 ************************************
00:33:01.149 END TEST nvmf_bdevperf
00:33:01.149 ************************************
00:33:01.149 23:38:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:33:01.149 23:38:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:33:01.149 23:38:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:33:01.149 23:38:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:33:01.149 ************************************
00:33:01.149 START TEST nvmf_target_disconnect
00:33:01.149 ************************************
00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:33:01.149 * Looking for test storage...
00:33:01.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.149 
23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:33:01.149 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.150 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:33:01.150 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:01.150 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:01.150 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:01.150 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:01.150 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:01.150 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:01.150 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:01.150 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:01.150 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:01.150 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:01.150 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:33:01.150 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:33:01.150 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:01.150 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:01.150 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:01.150 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:01.150 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:01.150 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:01.150 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:01.150 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:01.150 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:01.150 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:01.150 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:33:01.150 23:38:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:03.053 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:03.053 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:33:03.053 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:03.053 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:03.053 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:03.053 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:03.053 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:03.053 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:33:03.053 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:03.053 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:33:03.053 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:33:03.053 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:33:03.053 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:33:03.053 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:33:03.053 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:33:03.053 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:03.054 
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:03.054 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:03.054 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:03.054 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:03.054 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@234 -- # (( 2 > 1 ))
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms
00:33:03.054
00:33:03.054 --- 10.0.0.2 ping statistics ---
00:33:03.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:03.054 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms
00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:33:03.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:03.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms
00:33:03.054
00:33:03.054 --- 10.0.0.1 ping statistics ---
00:33:03.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:03.054 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms
00:33:03.054 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']'
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable
23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:33:03.055 ************************************
00:33:03.055 START TEST nvmf_target_disconnect_tc1
00:33:03.055 ************************************
00:33:03.055 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1
00:33:03.055 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:33:03.055 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0
00:33:03.055 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:33:03.055 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:33:03.055 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:33:03.055 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:33:03.055 23:39:00
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:03.055 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:03.055 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:03.055 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:03.055 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:33:03.055 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:03.055 EAL: No free 2048 kB hugepages reported on node 1 00:33:03.314 [2024-07-25 23:39:00.782145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.314 [2024-07-25 23:39:00.782218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c63e0 with addr=10.0.0.2, port=4420 00:33:03.314 [2024-07-25 23:39:00.782256] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:03.314 [2024-07-25 23:39:00.782276] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:03.314 [2024-07-25 23:39:00.782289] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:33:03.314 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:03.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:03.314 Initializing NVMe Controllers 00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:03.314 00:33:03.314 real 0m0.095s 00:33:03.314 user 0m0.036s 00:33:03.314 sys 0m0.053s 00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:03.314 ************************************ 00:33:03.314 END TEST nvmf_target_disconnect_tc1 00:33:03.314 ************************************ 00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:03.314 23:39:00 
00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable
00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:33:03.314 ************************************
00:33:03.314 START TEST nvmf_target_disconnect_tc2
************************************
00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2
00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2
00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1536649
00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1536649
00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1536649 ']'
00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:03.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:33:03.314 23:39:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:03.314 [2024-07-25 23:39:00.893515] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:33:03.314 [2024-07-25 23:39:00.893590] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:03.314 EAL: No free 2048 kB hugepages reported on node 1
00:33:03.314 [2024-07-25 23:39:00.931018] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:33:03.314 [2024-07-25 23:39:00.959589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:33:03.572 [2024-07-25 23:39:01.052725] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:03.572 [2024-07-25 23:39:01.052774] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:03.572 [2024-07-25 23:39:01.052804] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:03.572 [2024-07-25 23:39:01.052815] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:03.572 [2024-07-25 23:39:01.052825] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:03.572 [2024-07-25 23:39:01.052960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:33:03.572 [2024-07-25 23:39:01.053024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:33:03.572 [2024-07-25 23:39:01.053276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:33:03.572 [2024-07-25 23:39:01.053281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:33:03.572 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:33:03.572 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:33:03.572 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:33:03.572 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:03.572 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:03.572 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:03.572 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:33:03.572 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:03.572 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:03.572 Malloc0
00:33:03.572 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:03.572 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:33:03.572 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:03.572 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:03.572 [2024-07-25 23:39:01.224307] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:03.572 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
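Up to this point disconnect_init has only started the target and created the malloc bdev and TCP transport; the rpc_cmd calls above go through SPDK's RPC socket. A hand-run equivalent using scripts/rpc.py is sketched below (the checkout path and the pre-existing cvl_0_0_ns_spdk network namespace are assumptions taken from this run; polling rpc_get_methods stands in for waitforlisten).

  # Sketch: reproduce disconnect_init's bring-up by hand.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed path
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # Wait until the RPC socket (/var/tmp/spdk.sock by default) answers.
  until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
  # 64 MiB malloc bdev with 512-byte blocks, then the TCP transport.
  "$SPDK_DIR/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
  "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o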
00:33:03.572 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:33:03.572 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:03.572 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:03.572 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:03.572 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:33:03.572 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:03.572 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:03.572 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:03.572 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:03.572 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:03.573 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:03.573 [2024-07-25 23:39:01.252563] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:03.573 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:03.573 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:33:03.573 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:03.573 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:03.573 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:03.573 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1536703
00:33:03.573 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:33:03.573 23:39:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:33:03.831 EAL: No free 2048 kB hugepages reported on node 1
00:33:05.747 23:39:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1536649
00:33:05.747 23:39:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
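The remaining setup plus the fault injection, continuing the sketch above (same shell session, same assumptions; the PIDs 1536649/1536703 in the log are of course run-specific). Every command here is taken from the rpc_cmd traces in the log; only the rpc.py invocation style is the sketch's own.

  # Expose Malloc0 via nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, plus discovery.
  "$SPDK_DIR/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # Start I/O, then kill the target under it; the failure storm that follows
  # in the log is exactly the condition the test wants to exercise.
  "$SPDK_DIR/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  reconnectpid=$!
  sleep 2
  kill -9 "$nvmfpid"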
00:33:05.747 Read completed with error (sct=0, sc=8)
00:33:05.747 starting I/O failed
[... the Read/Write "completed with error (sct=0, sc=8)" / "starting I/O failed" pair repeats for all 32 outstanding I/Os on the qpair (queue depth -q 32) ...]
[2024-07-25 23:39:03.277477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... an identical batch of 32 failed Read/Write completions follows for each remaining qpair ...]
00:33:05.747 [2024-07-25 23:39:03.277808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.747 [2024-07-25 23:39:03.278105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:05.747 [2024-07-25 23:39:03.278396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:05.747 [2024-07-25 23:39:03.278561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:05.747 [2024-07-25 23:39:03.278593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:05.747 qpair failed and we were unable to recover it.
[... this connect()-failed / sock-connection-error / "qpair failed and we were unable to recover it." triple repeats throughout the retry window, cycling over tqpair=0x7fdc1c000b90, tqpair=0x7fdc0c000b90 and tqpair=0x7fdc14000b90; first occurrence for each of the other two sockets: ...]
00:33:05.749 [2024-07-25 23:39:03.289879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:05.749 [2024-07-25 23:39:03.289922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:05.749 qpair failed and we were unable to recover it.
00:33:05.749 [2024-07-25 23:39:03.292052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:05.749 [2024-07-25 23:39:03.292104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:05.749 qpair failed and we were unable to recover it.
00:33:05.750 [2024-07-25 23:39:03.299769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:05.750 [2024-07-25 23:39:03.299795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:05.750 qpair failed and we were unable to recover it.
00:33:05.750 [2024-07-25 23:39:03.299926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.750 [2024-07-25 23:39:03.299952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.750 qpair failed and we were unable to recover it. 00:33:05.750 [2024-07-25 23:39:03.300135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.750 [2024-07-25 23:39:03.300165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.752 qpair failed and we were unable to recover it. 00:33:05.752 [2024-07-25 23:39:03.300281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.752 [2024-07-25 23:39:03.300309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.752 qpair failed and we were unable to recover it. 00:33:05.752 [2024-07-25 23:39:03.300453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.752 [2024-07-25 23:39:03.300481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.300627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.300656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.300778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.300807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.300984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.301010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.301169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.301195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.301329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.301354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.301487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.301513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 
00:33:05.753 [2024-07-25 23:39:03.301647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.301674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.301809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.301834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.301938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.301963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.302121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.302147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.302278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.302307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.302455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.302481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.302640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.302682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.302827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.302855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.303001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.303026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.303162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.303188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 
00:33:05.753 [2024-07-25 23:39:03.303295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.303321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.303473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.303498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.303626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.303651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.303784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.303810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.303970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.303996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.304100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.304126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.304226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.304252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.304395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.304434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.304582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.304609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.304724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.304750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 
00:33:05.753 [2024-07-25 23:39:03.304908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.304934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.305036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.305068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.305208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.753 [2024-07-25 23:39:03.305233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.753 qpair failed and we were unable to recover it. 00:33:05.753 [2024-07-25 23:39:03.305366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.305392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.305523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.305548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.305698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.305726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.305881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.305906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.306054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.306101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.306252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.306277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.306545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.306593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 
00:33:05.754 [2024-07-25 23:39:03.306738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.306767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.306905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.306959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.307142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.307171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.307322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.307366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.307489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.307532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.307711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.307758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.307918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.307943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.308044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.308082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.308202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.308245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.308376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.308403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 
00:33:05.754 [2024-07-25 23:39:03.308561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.308587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.308717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.308743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.308857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.308882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.309016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.309042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.309158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.309189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.309353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.309378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.309510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.309535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.309670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.309696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.309805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.309831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.309960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.309986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 
00:33:05.754 [2024-07-25 23:39:03.310111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.310150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.310286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.310313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.310446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.310472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.310581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.310609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.310743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.310770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.310901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.310928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.311068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.311095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.311198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.311224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.311398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.311424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.311579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.311624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 
00:33:05.754 [2024-07-25 23:39:03.311777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.311821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.311977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.312003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.312157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.312188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.312346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.312389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.312548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.312578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.312801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.312851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.313003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.313029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.313144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.754 [2024-07-25 23:39:03.313169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.754 qpair failed and we were unable to recover it. 00:33:05.754 [2024-07-25 23:39:03.313332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.313358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.313556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.313584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 
00:33:05.755 [2024-07-25 23:39:03.313735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.313759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.313889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.313919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.314045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.314081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.314231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.314255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.314395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.314421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.314612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.314638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.314744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.314768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.314901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.314925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.315028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.315052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.315208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.315233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 
00:33:05.755 [2024-07-25 23:39:03.315399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.315424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.315559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.315584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.315714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.315740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.315851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.315877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.316026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.316053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.316217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.316242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.316374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.316399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.316546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.316571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.316703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.316728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.316882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.316910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 
00:33:05.755 [2024-07-25 23:39:03.317029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.317057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.317188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.317214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.317321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.317346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.317480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.317521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.317637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.317664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.317834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.317861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.318047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.318114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.318235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.318263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.318382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.318412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.318565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.318594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 
00:33:05.755 [2024-07-25 23:39:03.318871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.318922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.319080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.319107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.319238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.319264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.319479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.319508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.319648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.319676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.319852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.319881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.320017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.320044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.320266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.320292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.320446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.320475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.320626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.320654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 
00:33:05.755 [2024-07-25 23:39:03.320800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.320829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.320988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.321031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.321230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.321268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.321438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.321465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.321673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.321730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.321910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.321940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.755 [2024-07-25 23:39:03.322104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.755 [2024-07-25 23:39:03.322130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.755 qpair failed and we were unable to recover it. 00:33:05.756 [2024-07-25 23:39:03.322263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.756 [2024-07-25 23:39:03.322288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.756 qpair failed and we were unable to recover it. 00:33:05.756 [2024-07-25 23:39:03.322423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.756 [2024-07-25 23:39:03.322451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.756 qpair failed and we were unable to recover it. 00:33:05.756 [2024-07-25 23:39:03.322624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.756 [2024-07-25 23:39:03.322652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.756 qpair failed and we were unable to recover it. 
00:33:05.756 [2024-07-25 23:39:03.322773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:05.756 [2024-07-25 23:39:03.322803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:05.756 qpair failed and we were unable to recover it.
00:33:05.756 [... the same three-line error pattern repeats without interruption through 2024-07-25 23:39:03.361099 (log timestamps 00:33:05.756 - 00:33:05.761), cycling over tqpair=0x7fdc1c000b90, 0x7fdc14000b90, 0x7fdc0c000b90, and 0xfa44b0, always with errno = 111 against addr=10.0.0.2, port=4420 ...]
00:33:05.762 [2024-07-25 23:39:03.361233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.762 [2024-07-25 23:39:03.361259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.762 qpair failed and we were unable to recover it. 00:33:05.762 [2024-07-25 23:39:03.361391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.762 [2024-07-25 23:39:03.361416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.762 qpair failed and we were unable to recover it. 00:33:05.762 [2024-07-25 23:39:03.361582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.762 [2024-07-25 23:39:03.361611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.762 qpair failed and we were unable to recover it. 00:33:05.762 [2024-07-25 23:39:03.361743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.762 [2024-07-25 23:39:03.361787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.762 qpair failed and we were unable to recover it. 00:33:05.762 [2024-07-25 23:39:03.361956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.762 [2024-07-25 23:39:03.361984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.762 qpair failed and we were unable to recover it. 00:33:05.762 [2024-07-25 23:39:03.362133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.762 [2024-07-25 23:39:03.362158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.762 qpair failed and we were unable to recover it. 00:33:05.762 [2024-07-25 23:39:03.362272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.762 [2024-07-25 23:39:03.362297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.762 qpair failed and we were unable to recover it. 00:33:05.762 [2024-07-25 23:39:03.362405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.762 [2024-07-25 23:39:03.362430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.762 qpair failed and we were unable to recover it. 00:33:05.762 [2024-07-25 23:39:03.362602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.762 [2024-07-25 23:39:03.362627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.762 qpair failed and we were unable to recover it. 00:33:05.762 [2024-07-25 23:39:03.362819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.762 [2024-07-25 23:39:03.362847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.762 qpair failed and we were unable to recover it. 
00:33:05.762 [2024-07-25 23:39:03.362968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.762 [2024-07-25 23:39:03.362996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.762 qpair failed and we were unable to recover it. 00:33:05.762 [2024-07-25 23:39:03.363146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.762 [2024-07-25 23:39:03.363171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.363330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.363356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.363470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.363494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.363641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.363668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.363772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.363799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.363961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.363986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.364114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.364141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.364283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.364310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.364421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.364449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 
00:33:05.763 [2024-07-25 23:39:03.364599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.364624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.364782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.364810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.364981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.365008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.365159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.365185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.365336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.365365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.365518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.365546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.365693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.365721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.365844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.365869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.365983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.366007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.366134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.366160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 
00:33:05.763 [2024-07-25 23:39:03.366288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.366313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.366471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.366496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.366650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.366678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.366802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.366843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.366954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.366982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.367135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.367161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.367271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.367296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.367425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.367454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.367635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.367663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.367779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.367807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 
00:33:05.763 [2024-07-25 23:39:03.367930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.367958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.368110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.368149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.368276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.368314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.368472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.368517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.368667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.368711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.368837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.368863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.369027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.369053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.369202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.369246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.369374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.369418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.369576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.369624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 
00:33:05.763 [2024-07-25 23:39:03.369862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.369905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.370016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.370046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.763 qpair failed and we were unable to recover it. 00:33:05.763 [2024-07-25 23:39:03.370216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.763 [2024-07-25 23:39:03.370243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.370382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.370408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.370567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.370618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.370765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.370794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.370940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.370968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.371115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.371140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.371290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.371318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.371486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.371514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 
00:33:05.764 [2024-07-25 23:39:03.371697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.371761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.371873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.371901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.372044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.372079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.372251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.372279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.372446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.372489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.372672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.372735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.372967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.373017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.373219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.373246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.373368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.373397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.373575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.373604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 
00:33:05.764 [2024-07-25 23:39:03.373814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.373870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.374009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.374038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.374176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.374203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.374316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.374343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.374523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.374553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.374781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.374807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.374969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.374999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.375179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.375210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.375358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.375383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.375558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.375607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 
00:33:05.764 [2024-07-25 23:39:03.375773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.375826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.375965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.375994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.376132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.376160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.376299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.764 [2024-07-25 23:39:03.376325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.764 qpair failed and we were unable to recover it. 00:33:05.764 [2024-07-25 23:39:03.376488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.765 [2024-07-25 23:39:03.376514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.765 qpair failed and we were unable to recover it. 00:33:05.765 [2024-07-25 23:39:03.376739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.765 [2024-07-25 23:39:03.376799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.765 qpair failed and we were unable to recover it. 00:33:05.765 [2024-07-25 23:39:03.376974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.765 [2024-07-25 23:39:03.377002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.765 qpair failed and we were unable to recover it. 00:33:05.765 [2024-07-25 23:39:03.377134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.765 [2024-07-25 23:39:03.377162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.765 qpair failed and we were unable to recover it. 00:33:05.765 [2024-07-25 23:39:03.377299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.765 [2024-07-25 23:39:03.377326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.765 qpair failed and we were unable to recover it. 00:33:05.765 [2024-07-25 23:39:03.377459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.765 [2024-07-25 23:39:03.377488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.765 qpair failed and we were unable to recover it. 
00:33:05.765 [2024-07-25 23:39:03.377693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.765 [2024-07-25 23:39:03.377722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.765 qpair failed and we were unable to recover it. 00:33:05.765 [2024-07-25 23:39:03.377853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.765 [2024-07-25 23:39:03.377884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.765 qpair failed and we were unable to recover it. 00:33:05.765 [2024-07-25 23:39:03.378052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.765 [2024-07-25 23:39:03.378088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.765 qpair failed and we were unable to recover it. 00:33:05.765 [2024-07-25 23:39:03.378236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.765 [2024-07-25 23:39:03.378261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.765 qpair failed and we were unable to recover it. 00:33:05.765 [2024-07-25 23:39:03.378394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.765 [2024-07-25 23:39:03.378438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.765 qpair failed and we were unable to recover it. 00:33:05.765 [2024-07-25 23:39:03.378581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.765 [2024-07-25 23:39:03.378609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.765 qpair failed and we were unable to recover it. 00:33:05.765 [2024-07-25 23:39:03.378754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.765 [2024-07-25 23:39:03.378783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.765 qpair failed and we were unable to recover it. 00:33:05.765 [2024-07-25 23:39:03.378958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.765 [2024-07-25 23:39:03.378988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.765 qpair failed and we were unable to recover it. 00:33:05.765 [2024-07-25 23:39:03.379146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.765 [2024-07-25 23:39:03.379172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.765 qpair failed and we were unable to recover it. 00:33:05.765 [2024-07-25 23:39:03.379274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.765 [2024-07-25 23:39:03.379300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.765 qpair failed and we were unable to recover it. 
00:33:05.765 [2024-07-25 23:39:03.379436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.765 [2024-07-25 23:39:03.379479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.765 qpair failed and we were unable to recover it. 00:33:05.765 [2024-07-25 23:39:03.379602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.765 [2024-07-25 23:39:03.379644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.765 qpair failed and we were unable to recover it. 00:33:05.765 [2024-07-25 23:39:03.379825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.765 [2024-07-25 23:39:03.379854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.765 qpair failed and we were unable to recover it. 00:33:05.765 [2024-07-25 23:39:03.380030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.765 [2024-07-25 23:39:03.380067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.765 qpair failed and we were unable to recover it. 00:33:05.765 [2024-07-25 23:39:03.380224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.765 [2024-07-25 23:39:03.380250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.765 qpair failed and we were unable to recover it. 00:33:05.765 [2024-07-25 23:39:03.380479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.765 [2024-07-25 23:39:03.380533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.765 qpair failed and we were unable to recover it. 00:33:05.765 [2024-07-25 23:39:03.380731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.765 [2024-07-25 23:39:03.380781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.765 qpair failed and we were unable to recover it. 00:33:05.766 [2024-07-25 23:39:03.380923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.766 [2024-07-25 23:39:03.380951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.766 qpair failed and we were unable to recover it. 00:33:05.766 [2024-07-25 23:39:03.381132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.766 [2024-07-25 23:39:03.381158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.766 qpair failed and we were unable to recover it. 00:33:05.766 [2024-07-25 23:39:03.381289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.766 [2024-07-25 23:39:03.381316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.766 qpair failed and we were unable to recover it. 
00:33:05.766 [2024-07-25 23:39:03.381474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.766 [2024-07-25 23:39:03.381503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.766 qpair failed and we were unable to recover it. 00:33:05.766 [2024-07-25 23:39:03.381807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.766 [2024-07-25 23:39:03.381857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.766 qpair failed and we were unable to recover it. 00:33:05.766 [2024-07-25 23:39:03.381975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.766 [2024-07-25 23:39:03.382002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.766 qpair failed and we were unable to recover it. 00:33:05.766 [2024-07-25 23:39:03.382179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.766 [2024-07-25 23:39:03.382205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.766 qpair failed and we were unable to recover it. 00:33:05.766 [2024-07-25 23:39:03.382330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.766 [2024-07-25 23:39:03.382356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.766 qpair failed and we were unable to recover it. 00:33:05.766 [2024-07-25 23:39:03.382486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.766 [2024-07-25 23:39:03.382528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.766 qpair failed and we were unable to recover it. 00:33:05.766 [2024-07-25 23:39:03.382675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.766 [2024-07-25 23:39:03.382704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.766 qpair failed and we were unable to recover it. 00:33:05.766 [2024-07-25 23:39:03.382857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.766 [2024-07-25 23:39:03.382891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.766 qpair failed and we were unable to recover it. 00:33:05.766 [2024-07-25 23:39:03.383046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.766 [2024-07-25 23:39:03.383105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.767 qpair failed and we were unable to recover it. 00:33:05.767 [2024-07-25 23:39:03.383244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.767 [2024-07-25 23:39:03.383270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.767 qpair failed and we were unable to recover it. 
00:33:05.767 [2024-07-25 23:39:03.383410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.767 [2024-07-25 23:39:03.383437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.767 qpair failed and we were unable to recover it. 00:33:05.767 [2024-07-25 23:39:03.383615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.767 [2024-07-25 23:39:03.383644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.767 qpair failed and we were unable to recover it. 00:33:05.767 [2024-07-25 23:39:03.383811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.767 [2024-07-25 23:39:03.383839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.767 qpair failed and we were unable to recover it. 00:33:05.767 [2024-07-25 23:39:03.384018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.767 [2024-07-25 23:39:03.384043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.767 qpair failed and we were unable to recover it. 00:33:05.767 [2024-07-25 23:39:03.384160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.767 [2024-07-25 23:39:03.384186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.767 qpair failed and we were unable to recover it. 00:33:05.767 [2024-07-25 23:39:03.384326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.767 [2024-07-25 23:39:03.384353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.767 qpair failed and we were unable to recover it. 00:33:05.767 [2024-07-25 23:39:03.384546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.767 [2024-07-25 23:39:03.384572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.767 qpair failed and we were unable to recover it. 00:33:05.767 [2024-07-25 23:39:03.384680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.767 [2024-07-25 23:39:03.384724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.767 qpair failed and we were unable to recover it. 00:33:05.768 [2024-07-25 23:39:03.384864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.768 [2024-07-25 23:39:03.384892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.768 qpair failed and we were unable to recover it. 00:33:05.768 [2024-07-25 23:39:03.385039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.768 [2024-07-25 23:39:03.385073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.768 qpair failed and we were unable to recover it. 
00:33:05.768 [2024-07-25 23:39:03.385208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:05.768 [2024-07-25 23:39:03.385249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:05.768 qpair failed and we were unable to recover it.
00:33:05.768 [2024-07-25 23:39:03.385429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:05.768 [2024-07-25 23:39:03.385458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:05.768 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats continuously from 23:39:03.385 to 23:39:03.421 (log clock 00:33:05.768 through 00:33:05.782), alternating between tqpair=0x7fdc1c000b90 and tqpair=0x7fdc0c000b90, always with addr=10.0.0.2, port=4420, errno = 111 ...]
00:33:05.782 [2024-07-25 23:39:03.421619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:05.782 [2024-07-25 23:39:03.421646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:05.782 qpair failed and we were unable to recover it.
00:33:05.782 [2024-07-25 23:39:03.421775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.782 [2024-07-25 23:39:03.421802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.782 qpair failed and we were unable to recover it. 00:33:05.782 [2024-07-25 23:39:03.421962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.782 [2024-07-25 23:39:03.422005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.782 qpair failed and we were unable to recover it. 00:33:05.782 [2024-07-25 23:39:03.422134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.782 [2024-07-25 23:39:03.422160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.782 qpair failed and we were unable to recover it. 00:33:05.782 [2024-07-25 23:39:03.422258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.782 [2024-07-25 23:39:03.422284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.782 qpair failed and we were unable to recover it. 00:33:05.782 [2024-07-25 23:39:03.422459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.782 [2024-07-25 23:39:03.422492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.782 qpair failed and we were unable to recover it. 00:33:05.782 [2024-07-25 23:39:03.422674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.782 [2024-07-25 23:39:03.422700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.782 qpair failed and we were unable to recover it. 00:33:05.782 [2024-07-25 23:39:03.422806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.782 [2024-07-25 23:39:03.422850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.782 qpair failed and we were unable to recover it. 00:33:05.782 [2024-07-25 23:39:03.422966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.782 [2024-07-25 23:39:03.422995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.782 qpair failed and we were unable to recover it. 00:33:05.782 [2024-07-25 23:39:03.423128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.782 [2024-07-25 23:39:03.423155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.782 qpair failed and we were unable to recover it. 00:33:05.782 [2024-07-25 23:39:03.423294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.782 [2024-07-25 23:39:03.423321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.782 qpair failed and we were unable to recover it. 
00:33:05.782 [2024-07-25 23:39:03.423479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.782 [2024-07-25 23:39:03.423508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.782 qpair failed and we were unable to recover it. 00:33:05.782 [2024-07-25 23:39:03.423630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.782 [2024-07-25 23:39:03.423656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.423768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.423797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.423951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.423977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.424112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.424138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.424244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.424270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.424437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.424463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.424636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.424662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.424769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.424795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.424983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.425011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 
00:33:05.783 [2024-07-25 23:39:03.425140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.425167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.425268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.425294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.425424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.425450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.425556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.425582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.425712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.425737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.425923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.425963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.426105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.426133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.426241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.426269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.426372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.426399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.426555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.426581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 
00:33:05.783 [2024-07-25 23:39:03.426736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.426783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.426901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.426931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.427068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.427094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.427193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.427218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.427324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.427366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.427515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.427541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.427670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.427696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.427811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.427837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.427972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.427997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.428118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.428144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 
00:33:05.783 [2024-07-25 23:39:03.428261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.428288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.428396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.428422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.428577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.428621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.428767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.428814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.428939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.428970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.429089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.429116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.429253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.429280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.429397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.429423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.429581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.429607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.429768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.429798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 
00:33:05.783 [2024-07-25 23:39:03.429928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.429953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.430081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.430119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.430256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.430282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.430391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.430416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.430524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.430550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.430710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.430739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.430873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.430899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.431065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.431110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.431242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.431268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.431400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.431426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 
00:33:05.783 [2024-07-25 23:39:03.431570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.431613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.431785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.431814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.431996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.432022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.432149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.432188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.432324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.432381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.432538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.432565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.432705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.432747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.432861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.432889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.433048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.433082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.433181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.433206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 
00:33:05.783 [2024-07-25 23:39:03.433312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.783 [2024-07-25 23:39:03.433353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.783 qpair failed and we were unable to recover it. 00:33:05.783 [2024-07-25 23:39:03.433516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.433546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.433650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.433677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.433788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.433814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.433946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.433973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.434105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.434131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.434243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.434270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.434384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.434410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.434542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.434569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.434693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.434722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 
00:33:05.784 [2024-07-25 23:39:03.434858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.434885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.434995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.435021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.435188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.435214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.435314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.435340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.435477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.435503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.435695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.435725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.435850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.435876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.435975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.436001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.436191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.436230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.436376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.436403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 
00:33:05.784 [2024-07-25 23:39:03.436552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.436581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.436761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.436809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.436958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.436984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.437126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.437152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.437287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.437312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.437495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.437521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.437658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.437684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.437840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.437870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.438026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.438052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.438198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.438224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 
00:33:05.784 [2024-07-25 23:39:03.438382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.438430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.438589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.438615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.438722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.438749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.438848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.438875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.439010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.439036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.439144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.439171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.439301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.439328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.439461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.439487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.439639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.439672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.439840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.439871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 
00:33:05.784 [2024-07-25 23:39:03.439999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.440025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.440142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.440176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.440330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.440373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.440510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.440535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.440695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.440739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.440877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.440905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.441031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.441056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.441199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.441225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.441403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.441431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.441579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.441605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 
00:33:05.784 [2024-07-25 23:39:03.441715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.441740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.441980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.442011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.442235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.442262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.442413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.442441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.442602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.442649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.442836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.442862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.443044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.443078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.443213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.443240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.443344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.443371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.784 qpair failed and we were unable to recover it. 00:33:05.784 [2024-07-25 23:39:03.443536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.784 [2024-07-25 23:39:03.443578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 
00:33:05.785 [2024-07-25 23:39:03.443770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.443797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.443934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.443960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.444105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.444131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.444241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.444267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.444445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.444470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.444602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.444628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.444760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.444785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.444939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.444965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.445122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.445161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.445266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.445293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 
00:33:05.785 [2024-07-25 23:39:03.445407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.445433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.445565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.445592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.445705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.445731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.445943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.445970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.446149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.446176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.446286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.446311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.446414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.446440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.446545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.446571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.446743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.446772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.446927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.446954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 
00:33:05.785 [2024-07-25 23:39:03.447094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.447122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.447276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.447334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.447511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.447538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.447721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.447750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.447896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.447924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.448082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.448108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.448214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.448239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.448399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.448426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.448547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.448572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.448681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.448709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 
00:33:05.785 [2024-07-25 23:39:03.448835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.448864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.448994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.449020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.449142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.449168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.449277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.449302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.449465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.449491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.449668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.449714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.449833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.449862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.449983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.450008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.450151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.450178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.450315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.450357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 
00:33:05.785 [2024-07-25 23:39:03.450482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.450506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.450634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.450660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.450812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.450839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.450999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.451025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.451134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.451159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.451305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.451346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.451478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.451503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.451636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.451661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.451794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.451818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.451934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.451958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 
00:33:05.785 [2024-07-25 23:39:03.452092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.452131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.452274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.785 [2024-07-25 23:39:03.452301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.785 qpair failed and we were unable to recover it. 00:33:05.785 [2024-07-25 23:39:03.452417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.452444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.452560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.452586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.452762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.452789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.452928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.452953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.453067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.453094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.453241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.453279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.453423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.453450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.453629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.453658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 
00:33:05.786 [2024-07-25 23:39:03.453806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.453831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.453963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.453989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.454170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.454198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.454333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.454375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.454528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.454554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.454658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.454682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.454782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.454808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.454917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.454941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.455069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.455094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.455232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.455256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 
00:33:05.786 [2024-07-25 23:39:03.455426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.455451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.455565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.455590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.455696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.455722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.455819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.455845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.455979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.456004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.456192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.456225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.456391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.456426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.456555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.456583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.456718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.456744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.456898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.456925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 
00:33:05.786 [2024-07-25 23:39:03.457091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.457118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.457252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.457278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.457395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.457421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.457597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.457625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.457764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.457793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.457923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.457949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.458096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.458134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:05.786 [2024-07-25 23:39:03.458246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.786 [2024-07-25 23:39:03.458274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:05.786 qpair failed and we were unable to recover it. 00:33:06.056 [2024-07-25 23:39:03.458412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.056 [2024-07-25 23:39:03.458438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.056 qpair failed and we were unable to recover it. 00:33:06.056 [2024-07-25 23:39:03.458602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.056 [2024-07-25 23:39:03.458628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.056 qpair failed and we were unable to recover it. 
00:33:06.056 [2024-07-25 23:39:03.458726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.056 [2024-07-25 23:39:03.458752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.056 qpair failed and we were unable to recover it. 00:33:06.056 [2024-07-25 23:39:03.458894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.056 [2024-07-25 23:39:03.458923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.056 qpair failed and we were unable to recover it. 00:33:06.056 [2024-07-25 23:39:03.459054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.056 [2024-07-25 23:39:03.459089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.056 qpair failed and we were unable to recover it. 00:33:06.056 [2024-07-25 23:39:03.459233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.056 [2024-07-25 23:39:03.459261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.056 qpair failed and we were unable to recover it. 00:33:06.056 [2024-07-25 23:39:03.459415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.056 [2024-07-25 23:39:03.459445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.056 qpair failed and we were unable to recover it. 00:33:06.056 [2024-07-25 23:39:03.459596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.056 [2024-07-25 23:39:03.459624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.056 qpair failed and we were unable to recover it. 00:33:06.056 [2024-07-25 23:39:03.459751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.056 [2024-07-25 23:39:03.459779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.056 qpair failed and we were unable to recover it. 00:33:06.056 [2024-07-25 23:39:03.459926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.056 [2024-07-25 23:39:03.459965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.056 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.460114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.460144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.460296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.460322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 
00:33:06.057 [2024-07-25 23:39:03.460509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.460553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.460678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.460722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.460876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.460926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.461073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.461101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.461206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.461232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.461386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.461414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.461523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.461551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.461666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.461694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.461815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.461843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.461957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.461985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 
00:33:06.057 [2024-07-25 23:39:03.462110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.462136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.462243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.462269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.462384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.462410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.462536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.462561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.462749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.462777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.462969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.462997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.463143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.463170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.463295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.463321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.463428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.463452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.463578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.463604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 
00:33:06.057 [2024-07-25 23:39:03.463735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.463763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.463925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.463953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.464119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.464146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.464282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.464308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.464466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.464491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.464637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.464665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.464776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.464805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.465019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.465047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.465181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.057 [2024-07-25 23:39:03.465206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.057 qpair failed and we were unable to recover it. 00:33:06.057 [2024-07-25 23:39:03.465352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.465380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 
00:33:06.058 [2024-07-25 23:39:03.465517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.465551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.465714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.465743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.465889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.465916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.466120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.466159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.466318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.466363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.466520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.466564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.466746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.466796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.466922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.466948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.467118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.467145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.467304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.467333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 
00:33:06.058 [2024-07-25 23:39:03.467529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.467573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.467696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.467739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.467871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.467898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.468009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.468034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.468173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.468201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.468317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.468345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.468489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.468517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.468638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.468666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.468780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.468808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.468922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.468950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 
00:33:06.058 [2024-07-25 23:39:03.469108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.469147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.469290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.469321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.469468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.469498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.469670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.469698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.469866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.469895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.470036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.470076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.470202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.470232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.470403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.470431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.470573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.470600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.058 [2024-07-25 23:39:03.470719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.470747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 
00:33:06.058 [2024-07-25 23:39:03.470892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.058 [2024-07-25 23:39:03.470920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.058 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.471067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.471111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.471214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.471239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.471403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.471430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.471599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.471627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.471766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.471794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.471949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.471974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.472110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.472136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.472242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.472267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.472421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.472466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 
00:33:06.059 [2024-07-25 23:39:03.472644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.472672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.472814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.472841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.472997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.473022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.473160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.473185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.473283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.473308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.473427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.473455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.473564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.473592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.473696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.473723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.473931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.473956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.474088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.474114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 
00:33:06.059 [2024-07-25 23:39:03.474271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.474296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.474423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.474450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.474620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.474648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.474783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.474825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.474950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.474979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.475095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.475139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.475276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.475302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.475463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.475509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.475654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.475683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 00:33:06.059 [2024-07-25 23:39:03.475797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.059 [2024-07-25 23:39:03.475825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.059 qpair failed and we were unable to recover it. 
00:33:06.059 [2024-07-25 23:39:03.475991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.059 [2024-07-25 23:39:03.476017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.059 qpair failed and we were unable to recover it.
00:33:06.059 [2024-07-25 23:39:03.476153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.059 [2024-07-25 23:39:03.476178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.059 qpair failed and we were unable to recover it.
00:33:06.059 [2024-07-25 23:39:03.476277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.059 [2024-07-25 23:39:03.476318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.476484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.476512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.476626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.476654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.476839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.476868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.477017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.477045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.477194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.477220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.477374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.477417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.477611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.477661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.477813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.477856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.477984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.478010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.478120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.478146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.478251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.478276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.478422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.478447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.478603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.478629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.478775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.478800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.478939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.478965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.479123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.479149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.479310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.479336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.479504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.479535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.479719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.479762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.479899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.479925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.480019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.480049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.480163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.480189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.480288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.480313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.480425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.480450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.480584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.480609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.480740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.480765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.480886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.480913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.481031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.481084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.481231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.481258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.481394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.481420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.060 [2024-07-25 23:39:03.481531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.060 [2024-07-25 23:39:03.481557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.060 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.481689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.481719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.481898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.481924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.482027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.482054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.482195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.482223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.482335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.482364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.482501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.482529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.482674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.482702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.482850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.482876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.482983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.483008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.483135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.483165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.483279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.483308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.483461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.483489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.483633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.483662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.483838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.483888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.484020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.484046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.484161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.484186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.484338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.484366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.484535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.484564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.484729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.484773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.484903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.484929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.485028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.485054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.485220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.485264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.485392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.485435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.485538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.485565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.485695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.485721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.485855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.485881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.486028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.486072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.486197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.486242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.486391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.486419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.486552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.486581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.486722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.061 [2024-07-25 23:39:03.486749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.061 qpair failed and we were unable to recover it.
00:33:06.061 [2024-07-25 23:39:03.486866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.062 [2024-07-25 23:39:03.486894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.062 qpair failed and we were unable to recover it.
00:33:06.062 [2024-07-25 23:39:03.487046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.062 [2024-07-25 23:39:03.487086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.062 qpair failed and we were unable to recover it.
00:33:06.062 [2024-07-25 23:39:03.487223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.062 [2024-07-25 23:39:03.487250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.062 qpair failed and we were unable to recover it.
00:33:06.062 [2024-07-25 23:39:03.487387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.062 [2024-07-25 23:39:03.487419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.062 qpair failed and we were unable to recover it.
00:33:06.062 [2024-07-25 23:39:03.487620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.062 [2024-07-25 23:39:03.487649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.062 qpair failed and we were unable to recover it.
00:33:06.062 [2024-07-25 23:39:03.487771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.062 [2024-07-25 23:39:03.487800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.062 qpair failed and we were unable to recover it.
00:33:06.062 [2024-07-25 23:39:03.487962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.062 [2024-07-25 23:39:03.487990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.062 qpair failed and we were unable to recover it.
00:33:06.062 [2024-07-25 23:39:03.488145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.062 [2024-07-25 23:39:03.488172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.062 qpair failed and we were unable to recover it.
00:33:06.062 [2024-07-25 23:39:03.488279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.062 [2024-07-25 23:39:03.488304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.062 qpair failed and we were unable to recover it.
00:33:06.062 [2024-07-25 23:39:03.488472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.062 [2024-07-25 23:39:03.488502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.062 qpair failed and we were unable to recover it.
00:33:06.062 [2024-07-25 23:39:03.488719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.062 [2024-07-25 23:39:03.488767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.062 qpair failed and we were unable to recover it.
00:33:06.062 [2024-07-25 23:39:03.488899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.062 [2024-07-25 23:39:03.488947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.062 qpair failed and we were unable to recover it.
00:33:06.062 [2024-07-25 23:39:03.489101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.062 [2024-07-25 23:39:03.489126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.062 qpair failed and we were unable to recover it.
00:33:06.062 [2024-07-25 23:39:03.489281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.062 [2024-07-25 23:39:03.489306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.062 qpair failed and we were unable to recover it.
00:33:06.062 [2024-07-25 23:39:03.489458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.062 [2024-07-25 23:39:03.489486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.062 qpair failed and we were unable to recover it.
00:33:06.062 [2024-07-25 23:39:03.489612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.062 [2024-07-25 23:39:03.489638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.062 qpair failed and we were unable to recover it.
00:33:06.062 [2024-07-25 23:39:03.489852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.062 [2024-07-25 23:39:03.489903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.062 qpair failed and we were unable to recover it.
00:33:06.062 [2024-07-25 23:39:03.490015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.062 [2024-07-25 23:39:03.490043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.062 qpair failed and we were unable to recover it.
00:33:06.062 [2024-07-25 23:39:03.490206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.062 [2024-07-25 23:39:03.490231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.062 qpair failed and we were unable to recover it.
00:33:06.062 [2024-07-25 23:39:03.490428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.062 [2024-07-25 23:39:03.490474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.062 qpair failed and we were unable to recover it.
00:33:06.062 [2024-07-25 23:39:03.490579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.062 [2024-07-25 23:39:03.490607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.062 qpair failed and we were unable to recover it.
00:33:06.062 [2024-07-25 23:39:03.490749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.062 [2024-07-25 23:39:03.490777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.062 qpair failed and we were unable to recover it.
00:33:06.062 [2024-07-25 23:39:03.490935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.062 [2024-07-25 23:39:03.490991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.062 qpair failed and we were unable to recover it.
00:33:06.062 [2024-07-25 23:39:03.491147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.062 [2024-07-25 23:39:03.491176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.062 qpair failed and we were unable to recover it.
00:33:06.062 [2024-07-25 23:39:03.491331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.062 [2024-07-25 23:39:03.491362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.062 qpair failed and we were unable to recover it.
00:33:06.062 [2024-07-25 23:39:03.491566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.062 [2024-07-25 23:39:03.491609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.062 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.491762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.491805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.491919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.491944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.492127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.492171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.492288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.492317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.492468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.492494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.492654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.492680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.492782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.492808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.492915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.492940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.493078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.493106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.493239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.493264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.493381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.493407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.493535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.493561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.493682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.493708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.493813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.493838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.493974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.493999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.494102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.494128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.494235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.494260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.494404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.494432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.494567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.494593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.494758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.494786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.494892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.494920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.495072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.495115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.495248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.495274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.495429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.495457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.495633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.495661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.495772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.495800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.495939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.495967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.496117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.496143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.496244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.496270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.496399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.496425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.496597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.063 [2024-07-25 23:39:03.496625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.063 qpair failed and we were unable to recover it.
00:33:06.063 [2024-07-25 23:39:03.496801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.496829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.497006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.497032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.497174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.497200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.497295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.497320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.497487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.497512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.497621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.497646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.497805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.497850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.497975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.498000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.498156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.498182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.498284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.498309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.498431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.498473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.498616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.498644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.498757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.498785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.498894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.498922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.499115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.499140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.499239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.499264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.499367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.499392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.499575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.499603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.499777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.499806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.499937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.499962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.500095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.500121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.500252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.500278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.500389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.500414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.500516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.500542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.500714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.500739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.500888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.500916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.501071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.501113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.501223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.501249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.501402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.501429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.501570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.501598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.501708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.501736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.501875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.064 [2024-07-25 23:39:03.501904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.064 qpair failed and we were unable to recover it.
00:33:06.064 [2024-07-25 23:39:03.502044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.502078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.502194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.502223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.502337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.502368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.502534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.502563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.502669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.502697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.502881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.502941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.503090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.503117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.503299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.503345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.503560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.503603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.503731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.503776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.503914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.503940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.504075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.504105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.504217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.504246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.504397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.504425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.504541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.504569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.504714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.504743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.504866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.504909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.505042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.505075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.505256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.505300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.505452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.505494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.505650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.505693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.505827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.505853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.505993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.506020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.506143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.506190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.506342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.506374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.506528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.506554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.506746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.506796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.506919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.506945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.507056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.507093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.507253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.507278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.507420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.507448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.065 [2024-07-25 23:39:03.507611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.065 [2024-07-25 23:39:03.507640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.065 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.507812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.507840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.508009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.508037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.508188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.508227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.508431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.508473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.508652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.508682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.508851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.508908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.509027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.509053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.509190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.509215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.509348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.509378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.509532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.509560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.509674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.509702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.509875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.509904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.510042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.510082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.510187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.510212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.510314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.510357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.510487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.510528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.510670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.510698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.510810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.510838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.510976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.511003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.511121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.511148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.511296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.511324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.511507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.511558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.511726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.511755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.511865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.511898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.512057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.512089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.512205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.512231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.512379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.512407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.512538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.512581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.512698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.512726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.512867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.512895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.513057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.513088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.066 [2024-07-25 23:39:03.513202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.066 [2024-07-25 23:39:03.513227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.066 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.513333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.513358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.513518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.513547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.513677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.513702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.513858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.513886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.514015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.514043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.514185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.514211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.514320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.514374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.514547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.514575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.514726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.514756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.514904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.514933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.515087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.515113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.515258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.515284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.515413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.515442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.515651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.515679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.515822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.515851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.515986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.516011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.516159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.516185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.516318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.516343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.516448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.516473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.516605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.516633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.516785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.516811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.516932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.516960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.517136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.517162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.517276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.517301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.517449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.517478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.517625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.517654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.517822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.067 [2024-07-25 23:39:03.517850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.067 qpair failed and we were unable to recover it.
00:33:06.067 [2024-07-25 23:39:03.518029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.518057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.518228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.518253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.518367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.518392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.518549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.518575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.518696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.518724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.518867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.518899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.519023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.519051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.519197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.519236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.519375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.519420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.519598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.519642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.519825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.519869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.520010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.520036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.520181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.520207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.520335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.520388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.520571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.520614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.520786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.520837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.520978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.521006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.521180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.521206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.521337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.521380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.521559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.521608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.521750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.521778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.521923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.521951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.522112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.522141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.522294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.522342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.522459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.522502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.522631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.522673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.522803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.522829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.522965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.522992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.523110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.523137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.523301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.523327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.523472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.523498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.068 [2024-07-25 23:39:03.523624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.068 [2024-07-25 23:39:03.523652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.068 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.523779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.523807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.523942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.523970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.524106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.524134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.524280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.524324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.524469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.524512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.524701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.524744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.524872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.524899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.525028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.525054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.525251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.525280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.525427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.525455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.525590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.525618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.525861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.525908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.526054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.526091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.526221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.526247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.526371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.526401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.526542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.526571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.526709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.526737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.526910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.526940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.527069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.527096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.527250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.527295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.527430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.527488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.527629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.527672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.527801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.527835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.527985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.528011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.528206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.528250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.528441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.528470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.528635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.528660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.528795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.528821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.528937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.528971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.529134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.529165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.529280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.529309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.069 qpair failed and we were unable to recover it.
00:33:06.069 [2024-07-25 23:39:03.529456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.069 [2024-07-25 23:39:03.529481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.529642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.529670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.529779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.529808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.529935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.529963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.530134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.530162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.530268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.530296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.530421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.530449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.530646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.530691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.530851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.530895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.531025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.531051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.531172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.531199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.531332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.531358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.531483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.531509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.531650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.531677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.531813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.531839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.531969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.531995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.532129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.532156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.532264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.532289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.532431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.532456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.532605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.532634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.532779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.532807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.532964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.532990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.533111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.533138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.533314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.533358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.533487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.533518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.533665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.533694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.533806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.533838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.533995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.534025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.534196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.534223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.534359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.534384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.534482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.534508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.534678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.070 [2024-07-25 23:39:03.534721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.070 qpair failed and we were unable to recover it.
00:33:06.070 [2024-07-25 23:39:03.534856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.534884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.535069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.535112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.535245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.535271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.535400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.535429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.535575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.535603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.535747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.535787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.535898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.535927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.536048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.536158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.536310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.536338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.536483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.536511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 
00:33:06.071 [2024-07-25 23:39:03.536657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.536685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.536826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.536854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.536973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.536999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.537132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.537171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.537315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.537342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.537501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.537546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.537714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.537741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.537888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.537917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.538071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.538115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.538221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.538247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 
00:33:06.071 [2024-07-25 23:39:03.538403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.538431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.538647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.538676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.538790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.538818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.538972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.539001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.539162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.539187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.539317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.539343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.539474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.539503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.539643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.539671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.539789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.539817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.539990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.540016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 
00:33:06.071 [2024-07-25 23:39:03.540156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.540182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.540309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.071 [2024-07-25 23:39:03.540334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.071 qpair failed and we were unable to recover it. 00:33:06.071 [2024-07-25 23:39:03.540514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.540540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 00:33:06.072 [2024-07-25 23:39:03.540764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.540793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 00:33:06.072 [2024-07-25 23:39:03.540975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.541003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 00:33:06.072 [2024-07-25 23:39:03.541145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.541172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 00:33:06.072 [2024-07-25 23:39:03.541305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.541330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 00:33:06.072 [2024-07-25 23:39:03.541462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.541497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 00:33:06.072 [2024-07-25 23:39:03.541610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.541638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 00:33:06.072 [2024-07-25 23:39:03.541772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.541800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 
00:33:06.072 [2024-07-25 23:39:03.541923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.541949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 00:33:06.072 [2024-07-25 23:39:03.542102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.542128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 00:33:06.072 [2024-07-25 23:39:03.542262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.542287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 00:33:06.072 [2024-07-25 23:39:03.542465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.542493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 00:33:06.072 [2024-07-25 23:39:03.542639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.542668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 00:33:06.072 [2024-07-25 23:39:03.542785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.542818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 00:33:06.072 [2024-07-25 23:39:03.542966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.542994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 00:33:06.072 [2024-07-25 23:39:03.543123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.543156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 00:33:06.072 [2024-07-25 23:39:03.543247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.543273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 00:33:06.072 [2024-07-25 23:39:03.543420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.543460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 
00:33:06.072 [2024-07-25 23:39:03.543601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.543629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 00:33:06.072 [2024-07-25 23:39:03.543782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.543811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 00:33:06.072 [2024-07-25 23:39:03.543961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.543986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 00:33:06.072 [2024-07-25 23:39:03.544152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.544178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 00:33:06.072 [2024-07-25 23:39:03.544311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.544353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 00:33:06.072 [2024-07-25 23:39:03.544502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.544531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 00:33:06.072 [2024-07-25 23:39:03.544637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.544666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 00:33:06.072 [2024-07-25 23:39:03.544782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.544824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 00:33:06.072 [2024-07-25 23:39:03.544990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.072 [2024-07-25 23:39:03.545018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.072 qpair failed and we were unable to recover it. 
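For anyone triaging the block above: on Linux, errno = 111 is ECONNREFUSED, which connect() returns when the target host is reachable but nothing is accepting on the port (here 4420, the standard NVMe/TCP port, at 10.0.0.2). In other words, the test's NVMe-oF target is not listening yet, or has already gone away. A minimal standalone C sketch of the same failure mode (address and port copied from the log; this is an illustration, not SPDK's posix_sock_create itself):

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        /* Blocking AF_INET/SOCK_STREAM socket, the plain-POSIX case. */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in sa = { 0 };
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);                     /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);  /* target address from the log */

        /* With no listener on 10.0.0.2:4420 the peer answers the SYN with a
         * RST and connect() fails; errno is then ECONNREFUSED, which is 111
         * on Linux and matches "connect() failed, errno = 111" in the log.
         * (An unreachable host would instead give ETIMEDOUT or EHOSTUNREACH.) */
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }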
00:33:06.072 [2024-07-25 23:39:03.545165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb2470 is same with the state(5) to be set
00:33:06.072 [2024-07-25 23:39:03.545323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.072 [2024-07-25 23:39:03.545381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.072 qpair failed and we were unable to recover it.
[... five further connect()/qpair-failure cycles on tqpair=0x7fdc0c000b90, then three on tqpair=0xfa44b0, through 2024-07-25 23:39:03.546991 ...]
00:33:06.073 [2024-07-25 23:39:03.547185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.073 [2024-07-25 23:39:03.547211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.073 qpair failed and we were unable to recover it.
[... the identical connect()/qpair-failure cycle on tqpair=0xfa44b0 repeats, errno = 111 every time, through 2024-07-25 23:39:03.566799 ...]
00:33:06.077 [2024-07-25 23:39:03.566949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.566977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.567101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.567126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.567272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.567297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.567449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.567474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.567633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.567657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.567806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.567834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.567944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.567979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.568117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.568142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.568255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.568280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.568429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.568458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 
00:33:06.077 [2024-07-25 23:39:03.568611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.568635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.568736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.568762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.568940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.568968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.569091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.569117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.569244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.569270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.569403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.569428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.569560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.569585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.569695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.569720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.569821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.569846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.569956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.569981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 
00:33:06.077 [2024-07-25 23:39:03.570120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.570160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.570276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.570305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.570445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.570470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.570645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.570672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.570794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.570822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.570954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.570979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.571136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.571178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.571294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.571323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.571460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.571485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.077 [2024-07-25 23:39:03.571586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.571611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 
00:33:06.077 [2024-07-25 23:39:03.571761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.077 [2024-07-25 23:39:03.571788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.077 qpair failed and we were unable to recover it. 00:33:06.078 [2024-07-25 23:39:03.571938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.571962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 00:33:06.078 [2024-07-25 23:39:03.572098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.572124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 00:33:06.078 [2024-07-25 23:39:03.572255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.572283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 00:33:06.078 [2024-07-25 23:39:03.572466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.572492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 00:33:06.078 [2024-07-25 23:39:03.572639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.572667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 00:33:06.078 [2024-07-25 23:39:03.572783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.572811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 00:33:06.078 [2024-07-25 23:39:03.572965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.572990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 00:33:06.078 [2024-07-25 23:39:03.573125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.573167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 00:33:06.078 [2024-07-25 23:39:03.573339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.573367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 
00:33:06.078 [2024-07-25 23:39:03.573511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.573535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 00:33:06.078 [2024-07-25 23:39:03.573641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.573666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 00:33:06.078 [2024-07-25 23:39:03.573849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.573877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 00:33:06.078 [2024-07-25 23:39:03.574002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.574027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 00:33:06.078 [2024-07-25 23:39:03.574163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.574190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 00:33:06.078 [2024-07-25 23:39:03.574331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.574356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 00:33:06.078 [2024-07-25 23:39:03.574490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.574515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 00:33:06.078 [2024-07-25 23:39:03.574651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.574680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 00:33:06.078 [2024-07-25 23:39:03.574783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.574808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 00:33:06.078 [2024-07-25 23:39:03.574909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.574935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 
00:33:06.078 [2024-07-25 23:39:03.575072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.575097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 00:33:06.078 [2024-07-25 23:39:03.575225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.575253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 00:33:06.078 [2024-07-25 23:39:03.575401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.575427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 00:33:06.078 [2024-07-25 23:39:03.575532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.575557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 00:33:06.078 [2024-07-25 23:39:03.575677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.575702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 00:33:06.078 [2024-07-25 23:39:03.575810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.575835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 00:33:06.078 [2024-07-25 23:39:03.575941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.575967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 00:33:06.078 [2024-07-25 23:39:03.576109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.078 [2024-07-25 23:39:03.576134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.078 qpair failed and we were unable to recover it. 00:33:06.078 [2024-07-25 23:39:03.576272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.576298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.576406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.576430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 
00:33:06.079 [2024-07-25 23:39:03.576586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.576614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.576761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.576786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.576890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.576914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.577038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.577077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.577185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.577210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.577349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.577389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.577510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.577537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.577712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.577737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.577851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.577878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.578030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.578055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 
00:33:06.079 [2024-07-25 23:39:03.578219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.578244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.578401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.578427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.578594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.578619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.578750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.578774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.578873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.578902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.579015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.579040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.579153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.579179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.579295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.579320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.579488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.579512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.579617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.579643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 
00:33:06.079 [2024-07-25 23:39:03.579771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.579796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.579895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.579919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.580052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.580083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.580229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.580255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.580383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.580407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.580521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.580547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.580649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.580674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.580772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.580797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.580996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.581055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 00:33:06.079 [2024-07-25 23:39:03.581198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.079 [2024-07-25 23:39:03.581226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.079 qpair failed and we were unable to recover it. 
00:33:06.079 [2024-07-25 23:39:03.581383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.581413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.581547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.581592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.581744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.581787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.581948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.581973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.582087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.582133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.582252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.582279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.582467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.582494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.582635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.582663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.582796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.582821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.582996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.583020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 
00:33:06.080 [2024-07-25 23:39:03.583141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.583167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.583274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.583298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.583429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.583458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.583628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.583657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.583789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.583816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.583961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.583989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.584133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.584159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.584295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.584319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.584434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.584459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.584634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.584663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 
00:33:06.080 [2024-07-25 23:39:03.584804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.584832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.585008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.585036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.585183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.585221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.585340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.585367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.585474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.585502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.585658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.585688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.585833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.585861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.586002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.586029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.586180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.586206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.586325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.586350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 
00:33:06.080 [2024-07-25 23:39:03.586506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.586534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.080 qpair failed and we were unable to recover it. 00:33:06.080 [2024-07-25 23:39:03.586677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.080 [2024-07-25 23:39:03.586705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.586852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.586880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.587023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.587053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.587212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.587238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.587408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.587437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.587547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.587576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.587715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.587744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.587935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.587961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.588094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.588125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 
00:33:06.081 [2024-07-25 23:39:03.588288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.588314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.588439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.588467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.588581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.588609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.588763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.588791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.588945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.588970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.589115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.589141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.589274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.589300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.589420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.589462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.589598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.589626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.589751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.589794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 
00:33:06.081 [2024-07-25 23:39:03.589951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.589976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.590090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.590117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.590255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.590284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.590435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.590463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.590681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.590709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.590837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.590879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.591044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.591074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.591206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.591231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.591363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.591388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.591524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.591567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 
00:33:06.081 [2024-07-25 23:39:03.591687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.591716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.591937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.591965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.592129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.592155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.081 qpair failed and we were unable to recover it. 00:33:06.081 [2024-07-25 23:39:03.592264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.081 [2024-07-25 23:39:03.592289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.592458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.592486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.592660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.592689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.592819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.592845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.592973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.593014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.593201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.593227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.593377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.593405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 
00:33:06.082 [2024-07-25 23:39:03.593627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.593656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.593778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.593806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.593975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.594004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.594190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.594229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.594371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.594417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.594599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.594628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.594774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.594817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.594954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.594980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.595103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.595130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.595256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.595306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 
00:33:06.082 [2024-07-25 23:39:03.595493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.595537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.595681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.595726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.595851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.595877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.596012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.596039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.596214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.596258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.596415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.596446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.596627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.596656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.596815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.596844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.597026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.597056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.597254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.597281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 
00:33:06.082 [2024-07-25 23:39:03.597437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.597466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.597634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.597664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.597809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.597839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.597998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.598024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.598151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.082 [2024-07-25 23:39:03.598190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.082 qpair failed and we were unable to recover it. 00:33:06.082 [2024-07-25 23:39:03.598326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.598353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.598560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.598589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.598759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.598788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.598970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.599020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.599179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.599205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 
00:33:06.083 [2024-07-25 23:39:03.599307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.599333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.599465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.599493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.599611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.599639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.599824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.599876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.599992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.600020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.600181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.600206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.600342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.600373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.600520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.600561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.600733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.600762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.600869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.600897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 
00:33:06.083 [2024-07-25 23:39:03.601010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.601038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.601173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.601200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.601330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.601355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.601512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.601539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.601658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.601700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.601844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.601871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.602009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.602037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.602203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.602241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.602403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.602434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.602595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.602624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 
00:33:06.083 [2024-07-25 23:39:03.602771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.602815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.602964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.602993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.603148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.603175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.603285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.603312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.603494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.603520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.603681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.603708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.603852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.083 [2024-07-25 23:39:03.603880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.083 qpair failed and we were unable to recover it. 00:33:06.083 [2024-07-25 23:39:03.604036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.604065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 00:33:06.084 [2024-07-25 23:39:03.604177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.604201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 00:33:06.084 [2024-07-25 23:39:03.604309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.604351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 
00:33:06.084 [2024-07-25 23:39:03.604475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.604516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 00:33:06.084 [2024-07-25 23:39:03.604634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.604662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 00:33:06.084 [2024-07-25 23:39:03.604810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.604841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 00:33:06.084 [2024-07-25 23:39:03.604994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.605028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 00:33:06.084 [2024-07-25 23:39:03.605175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.605201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 00:33:06.084 [2024-07-25 23:39:03.605359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.605386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 00:33:06.084 [2024-07-25 23:39:03.605565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.605608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 00:33:06.084 [2024-07-25 23:39:03.605736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.605763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 00:33:06.084 [2024-07-25 23:39:03.605885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.605928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 00:33:06.084 [2024-07-25 23:39:03.606036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.606066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 
00:33:06.084 [2024-07-25 23:39:03.606194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.606218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 00:33:06.084 [2024-07-25 23:39:03.606318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.606368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 00:33:06.084 [2024-07-25 23:39:03.606613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.606640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 00:33:06.084 [2024-07-25 23:39:03.606772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.606799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 00:33:06.084 [2024-07-25 23:39:03.606926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.606951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 00:33:06.084 [2024-07-25 23:39:03.607132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.607161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 00:33:06.084 [2024-07-25 23:39:03.607294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.607322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 00:33:06.084 [2024-07-25 23:39:03.607450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.607478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 00:33:06.084 [2024-07-25 23:39:03.607624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.607652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 00:33:06.084 [2024-07-25 23:39:03.607761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.607789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 
00:33:06.084 [2024-07-25 23:39:03.607925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.607956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 00:33:06.084 [2024-07-25 23:39:03.608101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.608139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 00:33:06.084 [2024-07-25 23:39:03.608303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.608330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 00:33:06.084 [2024-07-25 23:39:03.608520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.608573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 00:33:06.084 [2024-07-25 23:39:03.608746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.084 [2024-07-25 23:39:03.608794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.084 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.608953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.608979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.609138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.609165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.609311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.609339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.609505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.609550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.609696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.609739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 
00:33:06.085 [2024-07-25 23:39:03.609907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.609938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.610072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.610099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.610226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.610270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.610430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.610475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.610586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.610613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.610752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.610779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.610910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.610934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.611039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.611070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.611199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.611227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.611358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.611397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 
00:33:06.085 [2024-07-25 23:39:03.611605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.611633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.611839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.611884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.611997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.612023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.612183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.612209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.612362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.612391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.612530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.612572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.612702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.612746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.612857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.612884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.613015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.613040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.613158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.613184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 
00:33:06.085 [2024-07-25 23:39:03.613339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.613375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.613480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.613504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.613609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.613635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.613740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.613764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.613902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.613927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.614033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.085 [2024-07-25 23:39:03.614070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.085 qpair failed and we were unable to recover it. 00:33:06.085 [2024-07-25 23:39:03.614181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.614222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.614367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.614401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.614549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.614576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.614720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.614747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 
00:33:06.086 [2024-07-25 23:39:03.614863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.614891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.615004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.615031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.615159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.615185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.615336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.615363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.615531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.615558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.615674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.615702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.615818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.615845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.615988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.616016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.616150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.616179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.616291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.616318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 
00:33:06.086 [2024-07-25 23:39:03.616502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.616550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.616709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.616753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.616862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.616888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.617019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.617045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.617188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.617214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.617372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.617401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.617570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.617597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.617793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.617821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.617965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.617993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.618131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.618156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 
00:33:06.086 [2024-07-25 23:39:03.618282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.618307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.618419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.618444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.618604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.618632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.618805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.618832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.618950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.618983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.619186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.619213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.619327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.619352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.619486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.086 [2024-07-25 23:39:03.619510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.086 qpair failed and we were unable to recover it. 00:33:06.086 [2024-07-25 23:39:03.619633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.087 [2024-07-25 23:39:03.619660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.087 qpair failed and we were unable to recover it. 00:33:06.087 [2024-07-25 23:39:03.619770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.087 [2024-07-25 23:39:03.619799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.087 qpair failed and we were unable to recover it. 
[The same three-message record repeats without variation from 23:39:03.619 through 23:39:03.652: posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." The failing tqpair cycles among 0xfa44b0, 0x7fdc0c000b90, 0x7fdc14000b90, and 0x7fdc1c000b90.]
00:33:06.093 [2024-07-25 23:39:03.652658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.652686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.093 qpair failed and we were unable to recover it. 00:33:06.093 [2024-07-25 23:39:03.652815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.652845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.093 qpair failed and we were unable to recover it. 00:33:06.093 [2024-07-25 23:39:03.652975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.653003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.093 qpair failed and we were unable to recover it. 00:33:06.093 [2024-07-25 23:39:03.653116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.653142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.093 qpair failed and we were unable to recover it. 00:33:06.093 [2024-07-25 23:39:03.653291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.653334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.093 qpair failed and we were unable to recover it. 00:33:06.093 [2024-07-25 23:39:03.653451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.653494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.093 qpair failed and we were unable to recover it. 00:33:06.093 [2024-07-25 23:39:03.653621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.653664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.093 qpair failed and we were unable to recover it. 00:33:06.093 [2024-07-25 23:39:03.653771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.653801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.093 qpair failed and we were unable to recover it. 00:33:06.093 [2024-07-25 23:39:03.653907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.653934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.093 qpair failed and we were unable to recover it. 00:33:06.093 [2024-07-25 23:39:03.654120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.654172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.093 qpair failed and we were unable to recover it. 
00:33:06.093 [2024-07-25 23:39:03.654348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.654377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.093 qpair failed and we were unable to recover it. 00:33:06.093 [2024-07-25 23:39:03.654499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.654525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.093 qpair failed and we were unable to recover it. 00:33:06.093 [2024-07-25 23:39:03.654666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.654691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.093 qpair failed and we were unable to recover it. 00:33:06.093 [2024-07-25 23:39:03.654803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.654829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.093 qpair failed and we were unable to recover it. 00:33:06.093 [2024-07-25 23:39:03.654962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.654987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.093 qpair failed and we were unable to recover it. 00:33:06.093 [2024-07-25 23:39:03.655149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.655178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.093 qpair failed and we were unable to recover it. 00:33:06.093 [2024-07-25 23:39:03.655349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.655378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.093 qpair failed and we were unable to recover it. 00:33:06.093 [2024-07-25 23:39:03.655496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.655524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.093 qpair failed and we were unable to recover it. 00:33:06.093 [2024-07-25 23:39:03.655641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.655687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.093 qpair failed and we were unable to recover it. 00:33:06.093 [2024-07-25 23:39:03.655838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.655864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.093 qpair failed and we were unable to recover it. 
00:33:06.093 [2024-07-25 23:39:03.655995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.656020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.093 qpair failed and we were unable to recover it. 00:33:06.093 [2024-07-25 23:39:03.656143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.656169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.093 qpair failed and we were unable to recover it. 00:33:06.093 [2024-07-25 23:39:03.656280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.656306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.093 qpair failed and we were unable to recover it. 00:33:06.093 [2024-07-25 23:39:03.656452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.656477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.093 qpair failed and we were unable to recover it. 00:33:06.093 [2024-07-25 23:39:03.656643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.656669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.093 qpair failed and we were unable to recover it. 00:33:06.093 [2024-07-25 23:39:03.656803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.656830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.093 qpair failed and we were unable to recover it. 00:33:06.093 [2024-07-25 23:39:03.656992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.093 [2024-07-25 23:39:03.657018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.657186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.657216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.657361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.657390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.657507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.657535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 
00:33:06.094 [2024-07-25 23:39:03.657666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.657715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.657856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.657884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.658010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.658035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.658174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.658202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.658333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.658379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.658535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.658580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.658771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.658800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.658957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.658982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.659100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.659126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.659306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.659349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 
00:33:06.094 [2024-07-25 23:39:03.659497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.659539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.659708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.659753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.659888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.659914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.660105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.660134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.660277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.660320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.660445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.660489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.660639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.660682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.660821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.660847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.660984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.661011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.661177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.661222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 
00:33:06.094 [2024-07-25 23:39:03.661348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.661392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.661558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.661585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.661717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.661743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.661851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.661877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.662036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.662068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.662174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.662200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.662336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.662362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.662470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.094 [2024-07-25 23:39:03.662497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.094 qpair failed and we were unable to recover it. 00:33:06.094 [2024-07-25 23:39:03.662632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.662657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 00:33:06.095 [2024-07-25 23:39:03.662790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.662815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 
00:33:06.095 [2024-07-25 23:39:03.662921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.662946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 00:33:06.095 [2024-07-25 23:39:03.663097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.663138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 00:33:06.095 [2024-07-25 23:39:03.663261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.663290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 00:33:06.095 [2024-07-25 23:39:03.663475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.663505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 00:33:06.095 [2024-07-25 23:39:03.663632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.663658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 00:33:06.095 [2024-07-25 23:39:03.663855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.663904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 00:33:06.095 [2024-07-25 23:39:03.664051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.664085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 00:33:06.095 [2024-07-25 23:39:03.664194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.664220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 00:33:06.095 [2024-07-25 23:39:03.664453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.664483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 00:33:06.095 [2024-07-25 23:39:03.664653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.664703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 
00:33:06.095 [2024-07-25 23:39:03.664852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.664880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 00:33:06.095 [2024-07-25 23:39:03.665022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.665051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 00:33:06.095 [2024-07-25 23:39:03.665201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.665227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 00:33:06.095 [2024-07-25 23:39:03.665333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.665360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 00:33:06.095 [2024-07-25 23:39:03.665526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.665560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 00:33:06.095 [2024-07-25 23:39:03.665724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.665754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 00:33:06.095 [2024-07-25 23:39:03.665909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.665938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 00:33:06.095 [2024-07-25 23:39:03.666074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.666100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 00:33:06.095 [2024-07-25 23:39:03.666258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.666284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 00:33:06.095 [2024-07-25 23:39:03.666437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.666466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 
00:33:06.095 [2024-07-25 23:39:03.666579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.666609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 00:33:06.095 [2024-07-25 23:39:03.666821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.666850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 00:33:06.095 [2024-07-25 23:39:03.666976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.667006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 00:33:06.095 [2024-07-25 23:39:03.667150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.667177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 00:33:06.095 [2024-07-25 23:39:03.667283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.667309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 00:33:06.095 [2024-07-25 23:39:03.667442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.095 [2024-07-25 23:39:03.667468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.095 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.667663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.667693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.667903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.667933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.668047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.668083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.668216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.668242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 
00:33:06.096 [2024-07-25 23:39:03.668353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.668383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.668535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.668564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.668743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.668772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.668981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.669011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.669149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.669176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.669309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.669351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.669496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.669526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.669703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.669732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.669873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.669902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.670056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.670089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 
00:33:06.096 [2024-07-25 23:39:03.670222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.670247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.670409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.670452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.670614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.670646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.670798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.670828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.670975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.671006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.671138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.671165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.671290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.671316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.671468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.671500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.671657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.671687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.671862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.671891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 
00:33:06.096 [2024-07-25 23:39:03.672067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.672113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.672246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.672272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.672408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.672434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.672615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.672663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.672796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.672851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.673003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.673035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.673260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.096 [2024-07-25 23:39:03.673286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.096 qpair failed and we were unable to recover it. 00:33:06.096 [2024-07-25 23:39:03.673448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.673486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.673670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.673701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.673904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.673933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 
00:33:06.097 [2024-07-25 23:39:03.674122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.674149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.674279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.674307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.674504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.674533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.674656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.674699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.674832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.674863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.675008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.675039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.675191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.675229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.675371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.675398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.675536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.675580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.675700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.675728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 
00:33:06.097 [2024-07-25 23:39:03.675884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.675910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.676109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.676136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.676263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.676306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.676475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.676518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.676632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.676660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.676817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.676843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.676973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.676999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.677168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.677195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.677374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.677417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.677540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.677584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 
00:33:06.097 [2024-07-25 23:39:03.677715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.677741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.677883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.677911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.678013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.678039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.678195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.678224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.678366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.678395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.678578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.678607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.678752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.678782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.678926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.678953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.097 [2024-07-25 23:39:03.679051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.097 [2024-07-25 23:39:03.679088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.097 qpair failed and we were unable to recover it. 00:33:06.098 [2024-07-25 23:39:03.679246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.098 [2024-07-25 23:39:03.679272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.098 qpair failed and we were unable to recover it. 
[The same three-line failure repeats continuously from 23:39:03.679426 through 23:39:03.715280, cycling among tqpair=0x7fdc0c000b90, 0x7fdc14000b90, 0x7fdc1c000b90, and 0xfa44b0, always with addr=10.0.0.2, port=4420; every connect() attempt returned errno = 111 and no qpair could be recovered. The final occurrence:]
00:33:06.104 [2024-07-25 23:39:03.715254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.104 [2024-07-25 23:39:03.715280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.104 qpair failed and we were unable to recover it.
00:33:06.104 [2024-07-25 23:39:03.715405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.104 [2024-07-25 23:39:03.715431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.104 qpair failed and we were unable to recover it. 00:33:06.104 [2024-07-25 23:39:03.715605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.104 [2024-07-25 23:39:03.715660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.104 qpair failed and we were unable to recover it. 00:33:06.104 [2024-07-25 23:39:03.715842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.104 [2024-07-25 23:39:03.715871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.104 qpair failed and we were unable to recover it. 00:33:06.104 [2024-07-25 23:39:03.715984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.104 [2024-07-25 23:39:03.716012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.104 qpair failed and we were unable to recover it. 00:33:06.104 [2024-07-25 23:39:03.716144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.104 [2024-07-25 23:39:03.716171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.104 qpair failed and we were unable to recover it. 00:33:06.104 [2024-07-25 23:39:03.716298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.104 [2024-07-25 23:39:03.716326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.104 qpair failed and we were unable to recover it. 00:33:06.104 [2024-07-25 23:39:03.716505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.104 [2024-07-25 23:39:03.716534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.104 qpair failed and we were unable to recover it. 00:33:06.104 [2024-07-25 23:39:03.716685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.104 [2024-07-25 23:39:03.716714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.104 qpair failed and we were unable to recover it. 00:33:06.104 [2024-07-25 23:39:03.716857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.104 [2024-07-25 23:39:03.716886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.104 qpair failed and we were unable to recover it. 00:33:06.104 [2024-07-25 23:39:03.717042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.104 [2024-07-25 23:39:03.717076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.104 qpair failed and we were unable to recover it. 
00:33:06.104 [2024-07-25 23:39:03.717254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.104 [2024-07-25 23:39:03.717279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.104 qpair failed and we were unable to recover it. 00:33:06.104 [2024-07-25 23:39:03.717420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.104 [2024-07-25 23:39:03.717448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.104 qpair failed and we were unable to recover it. 00:33:06.104 [2024-07-25 23:39:03.717569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.104 [2024-07-25 23:39:03.717597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.104 qpair failed and we were unable to recover it. 00:33:06.104 [2024-07-25 23:39:03.717713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.104 [2024-07-25 23:39:03.717742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.104 qpair failed and we were unable to recover it. 00:33:06.104 [2024-07-25 23:39:03.717952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.104 [2024-07-25 23:39:03.718007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.104 qpair failed and we were unable to recover it. 00:33:06.104 [2024-07-25 23:39:03.718150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.104 [2024-07-25 23:39:03.718178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.104 qpair failed and we were unable to recover it. 00:33:06.104 [2024-07-25 23:39:03.718330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.104 [2024-07-25 23:39:03.718374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.104 qpair failed and we were unable to recover it. 00:33:06.104 [2024-07-25 23:39:03.718520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.104 [2024-07-25 23:39:03.718563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.104 qpair failed and we were unable to recover it. 00:33:06.104 [2024-07-25 23:39:03.718716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.104 [2024-07-25 23:39:03.718759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.104 qpair failed and we were unable to recover it. 00:33:06.104 [2024-07-25 23:39:03.718897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.104 [2024-07-25 23:39:03.718923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.104 qpair failed and we were unable to recover it. 
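In the burst above, errno = 111 is ECONNREFUSED on Linux: the TCP connection attempt to 10.0.0.2 port 4420 (the IANA-assigned NVMe/TCP port) was rejected because nothing was accepting connections there, so posix_sock_create() fails and nvme_tcp_qpair_connect_sock() abandons the qpair. The minimal standalone sketch below reproduces the same errno with plain POSIX sockets; it is illustrative only, not SPDK code, and simply reuses the address and port from the log.

/* Minimal sketch (plain POSIX sockets, not SPDK): a TCP connect() to an
 * address where no listener is bound fails with errno 111 (ECONNREFUSED),
 * the condition posix_sock_create() reports in the log above. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no nvmf target listening, this prints errno = 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}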
[... the identical retry bursts continue, alternating between tqpair=0xfa44b0 and tqpair=0x7fdc14000b90, addr=10.0.0.2, port=4420, from 2024-07-25 23:39:03.719 through 23:39:03.744 ...]
00:33:06.108 [2024-07-25 23:39:03.744220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.108 [2024-07-25 23:39:03.744245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.108 qpair failed and we were unable to recover it.
00:33:06.108 [2024-07-25 23:39:03.744369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.108 [2024-07-25 23:39:03.744397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.108 qpair failed and we were unable to recover it. 00:33:06.108 [2024-07-25 23:39:03.744541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.108 [2024-07-25 23:39:03.744569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.108 qpair failed and we were unable to recover it. 00:33:06.108 [2024-07-25 23:39:03.744714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.108 [2024-07-25 23:39:03.744742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.108 qpair failed and we were unable to recover it. 00:33:06.108 [2024-07-25 23:39:03.744888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.108 [2024-07-25 23:39:03.744915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.108 qpair failed and we were unable to recover it. 00:33:06.108 [2024-07-25 23:39:03.745083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.108 [2024-07-25 23:39:03.745110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.108 qpair failed and we were unable to recover it. 00:33:06.108 [2024-07-25 23:39:03.745231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.108 [2024-07-25 23:39:03.745261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.108 qpair failed and we were unable to recover it. 00:33:06.108 [2024-07-25 23:39:03.745460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.108 [2024-07-25 23:39:03.745505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.108 qpair failed and we were unable to recover it. 00:33:06.108 [2024-07-25 23:39:03.745659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.108 [2024-07-25 23:39:03.745701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.108 qpair failed and we were unable to recover it. 00:33:06.108 [2024-07-25 23:39:03.745828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.108 [2024-07-25 23:39:03.745853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.745985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.746011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 
00:33:06.109 [2024-07-25 23:39:03.746139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.746183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.746333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.746375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.746500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.746547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.746658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.746684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.746814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.746839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.746997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.747023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.747154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.747200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.747384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.747427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.747607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.747651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.747786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.747811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 
00:33:06.109 [2024-07-25 23:39:03.747915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.747941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.748101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.748132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.748251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.748279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.748424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.748452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.748614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.748666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.748804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.748832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.748958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.748987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.749178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.749208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.749384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.749427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.749575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.749604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 
00:33:06.109 [2024-07-25 23:39:03.749781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.749825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.749985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.750010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.750165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.750210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.750336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.750379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.750557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.750606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.750775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.750801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.750960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.750986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.751147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.751173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.751332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.751361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.751501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.751533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 
00:33:06.109 [2024-07-25 23:39:03.751657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.751685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.751801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.751829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.752003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.752030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.752156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.752182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.752300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.752343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.752489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.752533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.752725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.752767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.109 [2024-07-25 23:39:03.752866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.109 [2024-07-25 23:39:03.752891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.109 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.753023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.753049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.753191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.753217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 
00:33:06.110 [2024-07-25 23:39:03.753314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.753339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.753489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.753517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.753628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.753656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.753780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.753808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.753952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.753978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.754113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.754139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.754261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.754289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.754410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.754438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.754582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.754610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.754747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.754776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 
00:33:06.110 [2024-07-25 23:39:03.754926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.754951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.755104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.755130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.755263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.755288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.755404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.755432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.755598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.755626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.755796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.755823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.755980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.756015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.756155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.756182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.756342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.756385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.756513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.756559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 
00:33:06.110 [2024-07-25 23:39:03.756706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.756749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.756851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.756877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.756983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.757009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.757147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.757174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.757304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.757329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.757455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.757481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.757588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.757615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.757725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.757751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.110 qpair failed and we were unable to recover it. 00:33:06.110 [2024-07-25 23:39:03.757856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.110 [2024-07-25 23:39:03.757883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 00:33:06.111 [2024-07-25 23:39:03.758016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.758041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 
00:33:06.111 [2024-07-25 23:39:03.758210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.758235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 00:33:06.111 [2024-07-25 23:39:03.758336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.758362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 00:33:06.111 [2024-07-25 23:39:03.758466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.758492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 00:33:06.111 [2024-07-25 23:39:03.758616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.758645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 00:33:06.111 [2024-07-25 23:39:03.758806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.758833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 00:33:06.111 [2024-07-25 23:39:03.758964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.758990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 00:33:06.111 [2024-07-25 23:39:03.759114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.759143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 00:33:06.111 [2024-07-25 23:39:03.759286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.759315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 00:33:06.111 [2024-07-25 23:39:03.759478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.759522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 00:33:06.111 [2024-07-25 23:39:03.759686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.759711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 
00:33:06.111 [2024-07-25 23:39:03.759842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.759867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 00:33:06.111 [2024-07-25 23:39:03.759981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.760007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 00:33:06.111 [2024-07-25 23:39:03.760141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.760167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 00:33:06.111 [2024-07-25 23:39:03.760295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.760345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 00:33:06.111 [2024-07-25 23:39:03.760499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.760542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 00:33:06.111 [2024-07-25 23:39:03.760673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.760698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 00:33:06.111 [2024-07-25 23:39:03.760835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.760860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 00:33:06.111 [2024-07-25 23:39:03.760996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.761022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 00:33:06.111 [2024-07-25 23:39:03.761219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.761265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 00:33:06.111 [2024-07-25 23:39:03.761408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.761434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 
00:33:06.111 [2024-07-25 23:39:03.761564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.761589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 00:33:06.111 [2024-07-25 23:39:03.761722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.761747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 00:33:06.111 [2024-07-25 23:39:03.761859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.761884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 00:33:06.111 [2024-07-25 23:39:03.762020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.762051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 00:33:06.111 [2024-07-25 23:39:03.762187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.762212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.111 qpair failed and we were unable to recover it. 00:33:06.111 [2024-07-25 23:39:03.762362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.111 [2024-07-25 23:39:03.762390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.112 qpair failed and we were unable to recover it. 00:33:06.112 [2024-07-25 23:39:03.762532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.112 [2024-07-25 23:39:03.762560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.112 qpair failed and we were unable to recover it. 00:33:06.112 [2024-07-25 23:39:03.762740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.112 [2024-07-25 23:39:03.762768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.112 qpair failed and we were unable to recover it. 00:33:06.112 [2024-07-25 23:39:03.762948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.112 [2024-07-25 23:39:03.762975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.112 qpair failed and we were unable to recover it. 00:33:06.112 [2024-07-25 23:39:03.763111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.112 [2024-07-25 23:39:03.763138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.112 qpair failed and we were unable to recover it. 
00:33:06.112 [2024-07-25 23:39:03.763291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.112 [2024-07-25 23:39:03.763338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.112 qpair failed and we were unable to recover it. 00:33:06.112 [2024-07-25 23:39:03.763515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.112 [2024-07-25 23:39:03.763567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.112 qpair failed and we were unable to recover it. 00:33:06.112 [2024-07-25 23:39:03.763703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.112 [2024-07-25 23:39:03.763746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.112 qpair failed and we were unable to recover it. 00:33:06.112 [2024-07-25 23:39:03.763879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.112 [2024-07-25 23:39:03.763905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.112 qpair failed and we were unable to recover it. 00:33:06.112 [2024-07-25 23:39:03.764012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.112 [2024-07-25 23:39:03.764039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.112 qpair failed and we were unable to recover it. 00:33:06.112 [2024-07-25 23:39:03.764153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.112 [2024-07-25 23:39:03.764179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.112 qpair failed and we were unable to recover it. 00:33:06.112 [2024-07-25 23:39:03.764307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.112 [2024-07-25 23:39:03.764336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.112 qpair failed and we were unable to recover it. 00:33:06.112 [2024-07-25 23:39:03.764485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.112 [2024-07-25 23:39:03.764514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.112 qpair failed and we were unable to recover it. 00:33:06.112 [2024-07-25 23:39:03.764632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.112 [2024-07-25 23:39:03.764660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.112 qpair failed and we were unable to recover it. 00:33:06.112 [2024-07-25 23:39:03.764771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.112 [2024-07-25 23:39:03.764799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.112 qpair failed and we were unable to recover it. 
00:33:06.112 [2024-07-25 23:39:03.764967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.112 [2024-07-25 23:39:03.764996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.112 qpair failed and we were unable to recover it. 00:33:06.112 [2024-07-25 23:39:03.765121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.112 [2024-07-25 23:39:03.765150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.112 qpair failed and we were unable to recover it. 00:33:06.112 [2024-07-25 23:39:03.765284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.112 [2024-07-25 23:39:03.765312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.112 qpair failed and we were unable to recover it. 00:33:06.112 [2024-07-25 23:39:03.765428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.112 [2024-07-25 23:39:03.765456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.112 qpair failed and we were unable to recover it. 00:33:06.112 [2024-07-25 23:39:03.765600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.112 [2024-07-25 23:39:03.765628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.112 qpair failed and we were unable to recover it. 00:33:06.112 [2024-07-25 23:39:03.765742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.112 [2024-07-25 23:39:03.765770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.112 qpair failed and we were unable to recover it. 00:33:06.112 [2024-07-25 23:39:03.765904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.112 [2024-07-25 23:39:03.765932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.112 qpair failed and we were unable to recover it. 00:33:06.112 [2024-07-25 23:39:03.766080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.112 [2024-07-25 23:39:03.766122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.112 qpair failed and we were unable to recover it. 00:33:06.112 [2024-07-25 23:39:03.766252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.112 [2024-07-25 23:39:03.766278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.112 qpair failed and we were unable to recover it. 00:33:06.112 [2024-07-25 23:39:03.766430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.112 [2024-07-25 23:39:03.766458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.112 qpair failed and we were unable to recover it. 
00:33:06.112 [2024-07-25 23:39:03.766581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.112 [2024-07-25 23:39:03.766609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.112 qpair failed and we were unable to recover it.
00:33:06.385 [2024-07-25 23:39:03.768387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.385 [2024-07-25 23:39:03.768415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.385 qpair failed and we were unable to recover it.
00:33:06.388 [2024-07-25 23:39:03.795082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.388 [2024-07-25 23:39:03.795123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.388 qpair failed and we were unable to recover it.
[... the same three-line failure record repeats for every reconnect attempt from 23:39:03.766581 through 23:39:03.802782, cycling among tqpair=0xfa44b0, tqpair=0x7fdc14000b90, and tqpair=0x7fdc0c000b90, always with addr=10.0.0.2, port=4420 and errno = 111 ...]
00:33:06.388 [2024-07-25 23:39:03.802923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.388 [2024-07-25 23:39:03.802952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.388 qpair failed and we were unable to recover it. 00:33:06.388 [2024-07-25 23:39:03.803117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.388 [2024-07-25 23:39:03.803156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.388 qpair failed and we were unable to recover it. 00:33:06.388 [2024-07-25 23:39:03.803292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.388 [2024-07-25 23:39:03.803319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.388 qpair failed and we were unable to recover it. 00:33:06.388 [2024-07-25 23:39:03.803452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.388 [2024-07-25 23:39:03.803495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.388 qpair failed and we were unable to recover it. 00:33:06.388 [2024-07-25 23:39:03.803625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.388 [2024-07-25 23:39:03.803651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.388 qpair failed and we were unable to recover it. 00:33:06.388 [2024-07-25 23:39:03.803808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.388 [2024-07-25 23:39:03.803851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.388 qpair failed and we were unable to recover it. 00:33:06.388 [2024-07-25 23:39:03.804010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.388 [2024-07-25 23:39:03.804036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.388 qpair failed and we were unable to recover it. 00:33:06.388 [2024-07-25 23:39:03.804208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.388 [2024-07-25 23:39:03.804240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.804415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.804473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.804618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.804651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 
00:33:06.389 [2024-07-25 23:39:03.804764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.804793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.804939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.804967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.805128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.805154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.805260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.805287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.805409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.805438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.805609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.805638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.805752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.805780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.805934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.805961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.806071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.806098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.806244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.806271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 
00:33:06.389 [2024-07-25 23:39:03.806426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.806456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.806602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.806631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.806807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.806837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.806981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.807007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.807142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.807168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.807302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.807330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.807503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.807531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.807662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.807691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.807846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.807874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.808018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.808049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 
00:33:06.389 [2024-07-25 23:39:03.808210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.808240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.808374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.808417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.808601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.808630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.808754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.808785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.808897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.808926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.809085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.809140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.809263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.809296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.809436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.809462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.809630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.809690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.809806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.809834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 
00:33:06.389 [2024-07-25 23:39:03.809985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.810010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.810117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.810143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.810264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.810289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.810458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.810484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.810644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.810673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.810812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.810840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.810968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.810993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.811098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.811126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.811265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.811292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.811400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.811426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 
00:33:06.389 [2024-07-25 23:39:03.811606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.811632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.811772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.811804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.811932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.811976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.812149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.812175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.812312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.812339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.812497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.812527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.812675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.812704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.812880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.812909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.813073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.813100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.813238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.813265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 
00:33:06.389 [2024-07-25 23:39:03.813392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.813421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.813571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.813604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.813762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.813794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.813952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.813985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.814138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.814165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.814314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.814369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.814531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.814578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.814844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.814892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.815074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.815116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.815275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.815301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 
00:33:06.389 [2024-07-25 23:39:03.815407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.815433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.815565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.815590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.815759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.815785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.815895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.815921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.816048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.816086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.816216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.816245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.816402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.389 [2024-07-25 23:39:03.816427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.389 qpair failed and we were unable to recover it. 00:33:06.389 [2024-07-25 23:39:03.816568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.816593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.816731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.816757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.816896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.816922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 
00:33:06.390 [2024-07-25 23:39:03.817018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.817043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.817210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.817239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.817416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.817441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.817594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.817622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.817739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.817780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.817945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.817970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.818116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.818145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.818293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.818321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.818480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.818505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.818605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.818630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 
00:33:06.390 [2024-07-25 23:39:03.818771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.818799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.818958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.818983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.819131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.819157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.819265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.819290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.819399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.819424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.819559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.819601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.819772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.819800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.819954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.819979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.820082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.820108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.820223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.820252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 
00:33:06.390 [2024-07-25 23:39:03.820381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.820407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.820512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.820536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.820634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.820660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.820761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.820787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.820972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.821015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.821153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.821185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.821347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.821377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.821521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.821550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.821712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.821755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.821881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.821907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 
00:33:06.390 [2024-07-25 23:39:03.822014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.822040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.822194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.822221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.822404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.822430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.822585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.822630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.822788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.822814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.822940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.822966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.823082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.823110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.823244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.823282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.823463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.823489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.823592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.823618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 
00:33:06.390 [2024-07-25 23:39:03.823760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.823787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.823946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.823972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.824133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.824163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.824349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.824379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.824531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.824557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.824721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.824766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.824909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.824938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.825071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.825099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.825210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.825237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 00:33:06.390 [2024-07-25 23:39:03.825376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.390 [2024-07-25 23:39:03.825405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.390 qpair failed and we were unable to recover it. 
00:33:06.390 [2024-07-25 23:39:03.825561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.390 [2024-07-25 23:39:03.825587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.390 qpair failed and we were unable to recover it.
[... the same three-message sequence (connect() failed, errno = 111 / sock connection error of tqpair=... with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats for roughly 200 further connection attempts between 23:39:03.825 and 23:39:03.861, with tqpair values 0x7fdc0c000b90, 0x7fdc1c000b90, and 0xfa44b0, all targeting addr=10.0.0.2, port=4420 ...]
00:33:06.393 [2024-07-25 23:39:03.861265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.393 [2024-07-25 23:39:03.861290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.393 qpair failed and we were unable to recover it.
00:33:06.393 [2024-07-25 23:39:03.861423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.393 [2024-07-25 23:39:03.861448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.393 qpair failed and we were unable to recover it. 00:33:06.393 [2024-07-25 23:39:03.861583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.393 [2024-07-25 23:39:03.861625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.393 qpair failed and we were unable to recover it. 00:33:06.393 [2024-07-25 23:39:03.861799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.393 [2024-07-25 23:39:03.861857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.393 qpair failed and we were unable to recover it. 00:33:06.393 [2024-07-25 23:39:03.862016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.393 [2024-07-25 23:39:03.862042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.393 qpair failed and we were unable to recover it. 00:33:06.393 [2024-07-25 23:39:03.862155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.393 [2024-07-25 23:39:03.862182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.393 qpair failed and we were unable to recover it. 00:33:06.393 [2024-07-25 23:39:03.862350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.393 [2024-07-25 23:39:03.862377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.393 qpair failed and we were unable to recover it. 00:33:06.393 [2024-07-25 23:39:03.862486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.393 [2024-07-25 23:39:03.862513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.393 qpair failed and we were unable to recover it. 00:33:06.393 [2024-07-25 23:39:03.862614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.393 [2024-07-25 23:39:03.862640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.393 qpair failed and we were unable to recover it. 00:33:06.393 [2024-07-25 23:39:03.862802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.393 [2024-07-25 23:39:03.862832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.393 qpair failed and we were unable to recover it. 00:33:06.393 [2024-07-25 23:39:03.862954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.393 [2024-07-25 23:39:03.862980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.393 qpair failed and we were unable to recover it. 
00:33:06.393 [2024-07-25 23:39:03.863142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.393 [2024-07-25 23:39:03.863184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.393 qpair failed and we were unable to recover it. 00:33:06.393 [2024-07-25 23:39:03.863327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.393 [2024-07-25 23:39:03.863355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.393 qpair failed and we were unable to recover it. 00:33:06.393 [2024-07-25 23:39:03.863533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.393 [2024-07-25 23:39:03.863559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.393 qpair failed and we were unable to recover it. 00:33:06.393 [2024-07-25 23:39:03.863691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.393 [2024-07-25 23:39:03.863718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.393 qpair failed and we were unable to recover it. 00:33:06.393 [2024-07-25 23:39:03.863825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.393 [2024-07-25 23:39:03.863851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.393 qpair failed and we were unable to recover it. 00:33:06.393 [2024-07-25 23:39:03.864007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.864033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.864146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.864172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.864278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.864303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.864451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.864476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.864632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.864658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 
00:33:06.394 [2024-07-25 23:39:03.864785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.864814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.864994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.865020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.865149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.865175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.865278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.865304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.865415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.865441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.865598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.865624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.865811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.865836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.865966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.865993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.866185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.866214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.866326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.866355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 
00:33:06.394 [2024-07-25 23:39:03.866533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.866558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.866656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.866701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.866847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.866875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.867022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.867047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.867165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.867190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.867320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.867345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.867487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.867512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.867644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.867670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.867803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.867829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.867931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.867957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 
00:33:06.394 [2024-07-25 23:39:03.868066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.868091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.868235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.868263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.868393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.868419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.868519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.868545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.868717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.868742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.868877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.868903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.869004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.869029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.869165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.869192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.869319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.869345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.869442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.869468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 
00:33:06.394 [2024-07-25 23:39:03.869601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.869626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.869758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.869783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.869888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.869914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.870007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.870067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.870231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.870256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.870383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.870426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.870550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.870578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.870727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.870753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.870892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.870917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.871096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.871122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 
00:33:06.394 [2024-07-25 23:39:03.871255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.871280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.871383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.871409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.871545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.871571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.871704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.871731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.871867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.871892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.872068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.872094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.872225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.872250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.872390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.872433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.872547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.872576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.872710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.872736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 
00:33:06.394 [2024-07-25 23:39:03.872893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.872919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.873017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.873047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.873266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.873291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.873466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.873494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.873641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.873669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.873882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.873907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.874079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.874108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.874252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.874280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.874431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.874457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.874640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.874669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 
00:33:06.394 [2024-07-25 23:39:03.874813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.874841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.875019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.875045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.875203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.875232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.875379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.875404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.875560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.875585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.875695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.394 [2024-07-25 23:39:03.875737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.394 qpair failed and we were unable to recover it. 00:33:06.394 [2024-07-25 23:39:03.875878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.875906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.876052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.876083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.876214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.876240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.876440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.876483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 
00:33:06.395 [2024-07-25 23:39:03.876622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.876650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.876779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.876805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.876958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.876988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.877119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.877145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.877277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.877304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.877458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.877487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.877640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.877666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.877777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.877804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.877973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.878015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.878159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.878185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 
00:33:06.395 [2024-07-25 23:39:03.878320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.878345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.878448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.878491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.878650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.878675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.878836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.878878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.878997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.879026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.879168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.879194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.879341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.879366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.879493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.879548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.879723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.879748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.879926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.879954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 
00:33:06.395 [2024-07-25 23:39:03.880129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.880168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.880282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.880315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.880506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.880536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.880652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.880682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.880839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.880865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.881043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.881080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.881201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.881231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.881368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.881395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.881557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.881583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.881734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.881790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 
00:33:06.395 [2024-07-25 23:39:03.881970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.881996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.882134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.882162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.882301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.882326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.882500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.882526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.882665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.882690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.882829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.882860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.883011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.883037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.883153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.883180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.883320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.883346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.883474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.883508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 
00:33:06.395 [2024-07-25 23:39:03.883616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.883642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.883814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.883840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.884000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.884025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.884139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.884166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.884299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.884342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.884462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.884487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.884595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.884621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.884738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.884766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.884899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.884930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.885070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.885115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 
00:33:06.395 [2024-07-25 23:39:03.885224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.885253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.885373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.885400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.885558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.885584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.885769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.885819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.885973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.885998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.886117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.886143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.886274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.886300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.886433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.886459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.886564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.886589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.886784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.886814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 
00:33:06.395 [2024-07-25 23:39:03.886997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.887023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.395 [2024-07-25 23:39:03.887169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.395 [2024-07-25 23:39:03.887196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.395 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.887369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.887412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.887569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.887595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.887724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.887750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.887910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.887940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.888074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.888100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.888201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.888227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.888382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.888411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.888561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.888586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 
00:33:06.396 [2024-07-25 23:39:03.888698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.888725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.888896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.888928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.889087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.889114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.889272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.889302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.889487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.889538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.889689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.889715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.889855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.889881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.890038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.890074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.890211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.890239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.890369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.890395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 
00:33:06.396 [2024-07-25 23:39:03.890524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.890549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.890682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.890708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.890812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.890837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.890996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.891026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.891193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.891219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.891350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.891394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.891536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.891566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.891715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.891741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.891839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.891870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.892006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.892031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 
00:33:06.396 [2024-07-25 23:39:03.892178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.892204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.892311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.892337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.892444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.892471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.892627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.892653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.892792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.892821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.892953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.892981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.893130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.893156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.893336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.893365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.893545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.893593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.893712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.893737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 
00:33:06.396 [2024-07-25 23:39:03.893869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.893894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.894019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.894044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.894203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.894229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.894385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.894429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.894584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.894610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.894768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.894794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.894901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.894929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.895054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.895090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.895208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.895234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.895370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.895412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 
00:33:06.396 [2024-07-25 23:39:03.895620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.895671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.895820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.895846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.895976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.896020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.896172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.896215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.896384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.896412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.896597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.896627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.896801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.896851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.896977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.897003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.897111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.897138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.897251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.897278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 
00:33:06.396 [2024-07-25 23:39:03.897389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.897416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.897553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.897580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.897717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.897743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.897910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.897936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.898040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.898096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.898261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.898287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.898409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.898435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.898543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.898569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.898733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.898764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.898959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.898985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 
00:33:06.396 [2024-07-25 23:39:03.899091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.899133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.899317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.899343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.899503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.396 [2024-07-25 23:39:03.899529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.396 qpair failed and we were unable to recover it. 00:33:06.396 [2024-07-25 23:39:03.899676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.899706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.899927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.899956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.900085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.900112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.900210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.900236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.900355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.900383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.900575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.900601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.900739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.900766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 
00:33:06.397 [2024-07-25 23:39:03.900872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.900898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.901028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.901054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.901226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.901255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.901387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.901414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.901575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.901601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.901711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.901754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.901905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.901931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.902073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.902099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.902206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.902232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.902363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.902390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 
00:33:06.397 [2024-07-25 23:39:03.902490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.902517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.902654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.902680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.902854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.902880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.903006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.903032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.903142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.903168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.903302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.903328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.903459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.903485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.903587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.903613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.903725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.903753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.903863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.903889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 
00:33:06.397 [2024-07-25 23:39:03.904018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.904044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.904163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.904190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.904323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.904349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.904450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.904476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.904632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.904658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.904831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.904857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.904986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.905012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.905186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.905213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.905314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.905344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.905479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.905505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 
00:33:06.397 [2024-07-25 23:39:03.905634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.905662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.905793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.905819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.905923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.905949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.906099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.906126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.906283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.906309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.906439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.906465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.906676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.906727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.906886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.906911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.907048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.907079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.907200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.907226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 
00:33:06.397 [2024-07-25 23:39:03.907379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.907405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.907508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.907533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.907695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.907723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.907877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.907902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.908033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.908064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.908250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.908278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.908405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.908431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.908568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.908593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.908729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.908755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.908858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.908884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 
00:33:06.397 [2024-07-25 23:39:03.908987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.909013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.909120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.909146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.909308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.909333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.909459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.909484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.909587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.909613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.909776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.909802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.909954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.909982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.910147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.910191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.910353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.910381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.910512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.910555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 
00:33:06.397 [2024-07-25 23:39:03.910744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.910770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.910879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.397 [2024-07-25 23:39:03.910905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.397 qpair failed and we were unable to recover it. 00:33:06.397 [2024-07-25 23:39:03.911007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.398 [2024-07-25 23:39:03.911033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.398 qpair failed and we were unable to recover it. 00:33:06.398 [2024-07-25 23:39:03.911204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.398 [2024-07-25 23:39:03.911237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.398 qpair failed and we were unable to recover it. 00:33:06.398 [2024-07-25 23:39:03.911382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.398 [2024-07-25 23:39:03.911408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.398 qpair failed and we were unable to recover it. 00:33:06.398 [2024-07-25 23:39:03.911510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.398 [2024-07-25 23:39:03.911536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.398 qpair failed and we were unable to recover it. 00:33:06.398 [2024-07-25 23:39:03.911659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.398 [2024-07-25 23:39:03.911687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.398 qpair failed and we were unable to recover it. 00:33:06.398 [2024-07-25 23:39:03.911845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.398 [2024-07-25 23:39:03.911870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.398 qpair failed and we were unable to recover it. 00:33:06.398 [2024-07-25 23:39:03.912000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.398 [2024-07-25 23:39:03.912030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.398 qpair failed and we were unable to recover it. 00:33:06.398 [2024-07-25 23:39:03.912142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.398 [2024-07-25 23:39:03.912168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.398 qpair failed and we were unable to recover it. 
00:33:06.398 [2024-07-25 23:39:03.912330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.398 [2024-07-25 23:39:03.912355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.398 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every subsequent reconnection attempt from 2024-07-25 23:39:03.912330 through 23:39:03.947457 (console timestamps 00:33:06.398-00:33:06.401) ...]
00:33:06.401 [2024-07-25 23:39:03.947630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.947655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.947787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.947813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.947998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.948027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.948182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.948209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.948340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.948395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.948513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.948541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.948697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.948722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.948855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.948898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.949041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.949074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.949231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.949257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 
00:33:06.401 [2024-07-25 23:39:03.949412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.949448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.949573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.949601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.949732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.949758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.949913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.949954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.950079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.950108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.950261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.950287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.950411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.950452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.950606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.950634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.950763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.950788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.950897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.950923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 
00:33:06.401 [2024-07-25 23:39:03.951045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.951098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.951231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.951256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.951386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.951427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.951600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.951629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.951773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.951800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.951936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.951965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.952099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.952125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.952256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.952282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.952393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.952438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.952604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.952630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 
00:33:06.401 [2024-07-25 23:39:03.952739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.952765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.952900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.952925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.953113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.953141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.953270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.953295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.953423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.953448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.953596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.953625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.953778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.953804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.953901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.953927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.954093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.954120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.954226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.954252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 
00:33:06.401 [2024-07-25 23:39:03.954365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.954390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.954496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.954521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.954662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.954687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.954821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.954847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.954982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.955007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.955147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.955173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.955276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.955302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.955442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.955468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.955603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.955629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.955735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.955762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 
00:33:06.401 [2024-07-25 23:39:03.955869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.955895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.956000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.956026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.401 qpair failed and we were unable to recover it. 00:33:06.401 [2024-07-25 23:39:03.956140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.401 [2024-07-25 23:39:03.956166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.956276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.956302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.956447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.956472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.956611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.956637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.956810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.956836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.956941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.956967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.957088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.957115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.957272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.957297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 
00:33:06.402 [2024-07-25 23:39:03.957417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.957442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.957629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.957657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.957802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.957830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.958033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.958063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.958223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.958248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.958409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.958437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.958573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.958598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.958707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.958732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.958861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.958893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.959031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.959057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 
00:33:06.402 [2024-07-25 23:39:03.959176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.959201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.959345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.959373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.959527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.959553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.959688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.959714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.959868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.959896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.960044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.960076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.960182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.960208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.960333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.960359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.960496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.960522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.960686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.960711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 
00:33:06.402 [2024-07-25 23:39:03.960833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.960865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.961015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.961041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.961172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.961199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.961332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.961357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.961499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.961524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.961627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.961667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.961850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.961876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.961979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.962005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.962174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.962200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.962309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.962336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 
00:33:06.402 [2024-07-25 23:39:03.962445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.962471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.962605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.962630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.962752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.962781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.962968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.963004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.963128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.963154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.963251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.963277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.963377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.963402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.963504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.963531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.963661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.963691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.963831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.963856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 
00:33:06.402 [2024-07-25 23:39:03.964018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.964044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.964166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.964193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.964325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.964361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.964495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.964520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.964633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.964659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.964789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.964816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.964980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.965006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.965128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.965154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.965253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.965282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.965392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.965417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 
00:33:06.402 [2024-07-25 23:39:03.965618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.965644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.965772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.965797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.965927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.965952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.966064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.966089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.966222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.966246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.966357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.966382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.966549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.966574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.966740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.966765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.966898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.966923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.967098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.967124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 
00:33:06.402 [2024-07-25 23:39:03.967235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.967259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.967361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.967385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.402 qpair failed and we were unable to recover it. 00:33:06.402 [2024-07-25 23:39:03.967529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.402 [2024-07-25 23:39:03.967571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.403 qpair failed and we were unable to recover it. 00:33:06.403 [2024-07-25 23:39:03.967738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.403 [2024-07-25 23:39:03.967764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.403 qpair failed and we were unable to recover it. 00:33:06.403 [2024-07-25 23:39:03.967892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.403 [2024-07-25 23:39:03.967918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.403 qpair failed and we were unable to recover it. 00:33:06.403 [2024-07-25 23:39:03.968051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.403 [2024-07-25 23:39:03.968082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.403 qpair failed and we were unable to recover it. 00:33:06.403 [2024-07-25 23:39:03.968214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.403 [2024-07-25 23:39:03.968240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.403 qpair failed and we were unable to recover it. 00:33:06.403 [2024-07-25 23:39:03.968347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.403 [2024-07-25 23:39:03.968380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.403 qpair failed and we were unable to recover it. 00:33:06.403 [2024-07-25 23:39:03.968530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.403 [2024-07-25 23:39:03.968558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.403 qpair failed and we were unable to recover it. 00:33:06.403 [2024-07-25 23:39:03.968706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.403 [2024-07-25 23:39:03.968732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.403 qpair failed and we were unable to recover it. 
00:33:06.403 [2024-07-25 23:39:03.968832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.403 [2024-07-25 23:39:03.968857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.403 qpair failed and we were unable to recover it.
00:33:06.403 [2024-07-25 23:39:03.969009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.403 [2024-07-25 23:39:03.969038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.403 qpair failed and we were unable to recover it.
[... the same three-message failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 23:39:03.969173 through 23:39:04.000825 ...]
00:33:06.406 [2024-07-25 23:39:04.000947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.406 [2024-07-25 23:39:04.000973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.406 qpair failed and we were unable to recover it.
00:33:06.406 [2024-07-25 23:39:04.001129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.406 [2024-07-25 23:39:04.001186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.406 qpair failed and we were unable to recover it.
00:33:06.406 [the same triplet then repeats 190 times for tqpair=0xfa44b0, timestamps 2024-07-25 23:39:04.001129 through 23:39:04.033048, every attempt failing against addr=10.0.0.2, port=4420 with errno = 111]
00:33:06.408 [2024-07-25 23:39:04.033189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.408 [2024-07-25 23:39:04.033217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.408 qpair failed and we were unable to recover it. 00:33:06.408 [2024-07-25 23:39:04.033340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.408 [2024-07-25 23:39:04.033366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.408 qpair failed and we were unable to recover it. 00:33:06.408 [2024-07-25 23:39:04.033521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.408 [2024-07-25 23:39:04.033547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.408 qpair failed and we were unable to recover it. 00:33:06.408 [2024-07-25 23:39:04.033683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.408 [2024-07-25 23:39:04.033711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.408 qpair failed and we were unable to recover it. 00:33:06.408 [2024-07-25 23:39:04.033875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.408 [2024-07-25 23:39:04.033900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.408 qpair failed and we were unable to recover it. 00:33:06.408 [2024-07-25 23:39:04.034031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.408 [2024-07-25 23:39:04.034057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.408 qpair failed and we were unable to recover it. 00:33:06.408 [2024-07-25 23:39:04.034201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.408 [2024-07-25 23:39:04.034231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.408 qpair failed and we were unable to recover it. 00:33:06.408 [2024-07-25 23:39:04.034395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.408 [2024-07-25 23:39:04.034421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.408 qpair failed and we were unable to recover it. 00:33:06.408 [2024-07-25 23:39:04.034589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.408 [2024-07-25 23:39:04.034617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.034751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.034779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 
00:33:06.409 [2024-07-25 23:39:04.034922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.034951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.035151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.035177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.035285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.035311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.035453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.035478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.035589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.035615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.035746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.035771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.035886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.035913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.036047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.036080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.036212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.036237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.036344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.036375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 
00:33:06.409 [2024-07-25 23:39:04.036519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.036546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.036688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.036716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.036844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.036889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.037031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.037073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.037226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.037251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.037385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.037412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.037521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.037563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.037684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.037712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.037866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.037894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.038033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.038068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 
00:33:06.409 [2024-07-25 23:39:04.038220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.038246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.038377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.038403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.038533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.038558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.038666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.038691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.038801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.038826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.038958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.038983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.039113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.039139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.039271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.039297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.039430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.039456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.039560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.039585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 
00:33:06.409 [2024-07-25 23:39:04.039701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.039730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.039879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.039905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.040038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.040069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.040203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.040232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.040365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.040390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.040518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.040543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.040679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.040705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.040845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.040870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.041005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.041030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.041186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.041212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 
00:33:06.409 [2024-07-25 23:39:04.041323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.041348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.041462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.041487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.041590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.041624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.041769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.041795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.041925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.041950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.042089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.042115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.042217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.042242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.042380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.042405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.042514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.042539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.042645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.042670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 
00:33:06.409 [2024-07-25 23:39:04.042810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.042836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.042987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.043012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.043170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.043196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.043306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.043333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.043471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.043496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.043623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.043649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.043805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.043831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.043935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.043960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.044074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.044103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.044205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.044230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 
00:33:06.409 [2024-07-25 23:39:04.044394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.044430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.044570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.044595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.044700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.044727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.044858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.044883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.045003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.045028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.045141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.045167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.045303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.409 [2024-07-25 23:39:04.045329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.409 qpair failed and we were unable to recover it. 00:33:06.409 [2024-07-25 23:39:04.045443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.045469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.045634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.045660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.045776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.045802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 
00:33:06.410 [2024-07-25 23:39:04.045906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.045933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.046046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.046079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.046200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.046226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.046386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.046412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.046519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.046545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.046683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.046708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.046832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.046857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.047016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.047042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.047184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.047209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.048022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.048055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 
00:33:06.410 [2024-07-25 23:39:04.048250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.048277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.049002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.049035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.049202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.049228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.049336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.049378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.049582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.049610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.049780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.049809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.049964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.049990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.050093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.050120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.050269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.050297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.050450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.050478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 
00:33:06.410 [2024-07-25 23:39:04.050622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.050650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.050834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.050860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.050989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.051014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.051126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.051152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.051254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.051279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.051378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.051403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.051535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.051560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.052259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.052288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.052463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.052499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.052606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.052632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 
00:33:06.410 [2024-07-25 23:39:04.052788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.052814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.052974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.053003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.053140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.053167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.053278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.053304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.053467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.053492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.053596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.053621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.053736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.053778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.053920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.053948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.054123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.054149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.054260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.054285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 
00:33:06.410 [2024-07-25 23:39:04.054416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.054442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.054576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.054604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.054771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.054799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.054945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.054974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.055131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.055157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.055290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.055315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.055418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.055443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.055623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.055665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.055809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.055836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 00:33:06.410 [2024-07-25 23:39:04.055972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.410 [2024-07-25 23:39:04.055997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.410 qpair failed and we were unable to recover it. 
00:33:06.410 [2024-07-25 23:39:04.056108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.410 [2024-07-25 23:39:04.056134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.410 qpair failed and we were unable to recover it.
00:33:06.410 [2024-07-25 23:39:04.056231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.410 [2024-07-25 23:39:04.056257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.410 qpair failed and we were unable to recover it.
00:33:06.410 [2024-07-25 23:39:04.056392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.410 [2024-07-25 23:39:04.056417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.410 qpair failed and we were unable to recover it.
00:33:06.410 [2024-07-25 23:39:04.056571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.410 [2024-07-25 23:39:04.056611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.410 qpair failed and we were unable to recover it.
00:33:06.410 [2024-07-25 23:39:04.056733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.410 [2024-07-25 23:39:04.056776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.410 qpair failed and we were unable to recover it.
00:33:06.410 [2024-07-25 23:39:04.056936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.410 [2024-07-25 23:39:04.056961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.410 qpair failed and we were unable to recover it.
00:33:06.410 [2024-07-25 23:39:04.057069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.410 [2024-07-25 23:39:04.057094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.410 qpair failed and we were unable to recover it.
00:33:06.410 [2024-07-25 23:39:04.057230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.410 [2024-07-25 23:39:04.057255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.410 qpair failed and we were unable to recover it.
00:33:06.410 [2024-07-25 23:39:04.057410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.410 [2024-07-25 23:39:04.057438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.410 qpair failed and we were unable to recover it.
00:33:06.410 [2024-07-25 23:39:04.057588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.410 [2024-07-25 23:39:04.057617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.410 qpair failed and we were unable to recover it.
00:33:06.410 [2024-07-25 23:39:04.057774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.410 [2024-07-25 23:39:04.057799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.410 qpair failed and we were unable to recover it.
00:33:06.410 [2024-07-25 23:39:04.057912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.410 [2024-07-25 23:39:04.057939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.410 qpair failed and we were unable to recover it.
00:33:06.410 [2024-07-25 23:39:04.058051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.411 [2024-07-25 23:39:04.058084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.411 qpair failed and we were unable to recover it.
00:33:06.411 [2024-07-25 23:39:04.058221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.411 [2024-07-25 23:39:04.058247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.411 qpair failed and we were unable to recover it.
00:33:06.411 [2024-07-25 23:39:04.058379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.411 [2024-07-25 23:39:04.058404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.411 qpair failed and we were unable to recover it.
00:33:06.411 [2024-07-25 23:39:04.058542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.411 [2024-07-25 23:39:04.058567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.411 qpair failed and we were unable to recover it.
00:33:06.411 [2024-07-25 23:39:04.058753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.411 [2024-07-25 23:39:04.058781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.411 qpair failed and we were unable to recover it.
00:33:06.411 [2024-07-25 23:39:04.058926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.411 [2024-07-25 23:39:04.058955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.411 qpair failed and we were unable to recover it.
00:33:06.411 [2024-07-25 23:39:04.059128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.411 [2024-07-25 23:39:04.059154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.411 qpair failed and we were unable to recover it.
00:33:06.411 [2024-07-25 23:39:04.059287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.411 [2024-07-25 23:39:04.059313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.411 qpair failed and we were unable to recover it.
00:33:06.411 [2024-07-25 23:39:04.059452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.411 [2024-07-25 23:39:04.059477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.411 qpair failed and we were unable to recover it.
00:33:06.411 [2024-07-25 23:39:04.059583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.411 [2024-07-25 23:39:04.059609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.411 qpair failed and we were unable to recover it.
00:33:06.411 [2024-07-25 23:39:04.059734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.411 [2024-07-25 23:39:04.059763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.411 qpair failed and we were unable to recover it.
00:33:06.411 [2024-07-25 23:39:04.059892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.411 [2024-07-25 23:39:04.059918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.411 qpair failed and we were unable to recover it.
00:33:06.411 [2024-07-25 23:39:04.060026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.411 [2024-07-25 23:39:04.060052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.411 qpair failed and we were unable to recover it.
00:33:06.411 [2024-07-25 23:39:04.060159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.411 [2024-07-25 23:39:04.060184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.411 qpair failed and we were unable to recover it.
00:33:06.411 [2024-07-25 23:39:04.060310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.411 [2024-07-25 23:39:04.060336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.411 qpair failed and we were unable to recover it.
00:33:06.411 [2024-07-25 23:39:04.060497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.411 [2024-07-25 23:39:04.060526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.411 qpair failed and we were unable to recover it.
00:33:06.411 [2024-07-25 23:39:04.060699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.411 [2024-07-25 23:39:04.060727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.411 qpair failed and we were unable to recover it.
00:33:06.411 [2024-07-25 23:39:04.060844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.411 [2024-07-25 23:39:04.060873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.411 qpair failed and we were unable to recover it.
00:33:06.411 [2024-07-25 23:39:04.061005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.411 [2024-07-25 23:39:04.061031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.411 qpair failed and we were unable to recover it.
00:33:06.411 [2024-07-25 23:39:04.061158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.411 [2024-07-25 23:39:04.061184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.411 qpair failed and we were unable to recover it.
00:33:06.411 [2024-07-25 23:39:04.061338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.411 [2024-07-25 23:39:04.061363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.411 qpair failed and we were unable to recover it.
00:33:06.411 [2024-07-25 23:39:04.061560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.411 [2024-07-25 23:39:04.061588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.061803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.061831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.061949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.061977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.062140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.062167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.062281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.062306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.062417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.062443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.062608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.062634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.062783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.062812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.062950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.062977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.063101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.063127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.063241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.063266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.063400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.063446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.063576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.063635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.063821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.063850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.063961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.063989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.064117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.064144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.064259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.064290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.064458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.064485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.064627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.064673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.064829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.064855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.065026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.065052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.065203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.065229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.065337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.065362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.065493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.065519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.065664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.065693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.065843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.065868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.065975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.066000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.412 qpair failed and we were unable to recover it.
00:33:06.412 [2024-07-25 23:39:04.066144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.412 [2024-07-25 23:39:04.066170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.066275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.066301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.066440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.066465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.066599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.066625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.066732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.066758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.066887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.066913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.067040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.067075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.067189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.067215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.067311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.067337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.067466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.067492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.067619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.067644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.067752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.067777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.067886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.067914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.068066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.068104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.068222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.068250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.068390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.068417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.068548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.068580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.068720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.068747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.068854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.068881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.069016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.069045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.069165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.069191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.069353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.069379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.069487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.069514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.069683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.069709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.069843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.069869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.070011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.070037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.070180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.070207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.070314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.070340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.070447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.070473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.070590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.070620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.070758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.070785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.070908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.070946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.071082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.071110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.413 [2024-07-25 23:39:04.071243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.413 [2024-07-25 23:39:04.071269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.413 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.071394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.071424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.071540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.071570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.071681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.071709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.071881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.071909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.072021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.072050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.072188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.072213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.072319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.072363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.072546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.072571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.072676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.072706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.072862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.072910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.073071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.073115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.073251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.073277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.073434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.073481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.073601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.073631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.073807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.073834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.073978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.074006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.074145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.074172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.074284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.074312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.074464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.074491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.074650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.074679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.074793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.074824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.075017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.075070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.075234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.075261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.075426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.075479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.075662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.075711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.075867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.075897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.076051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.076084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.076245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.076271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.076394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.076437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.076587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.076616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.076782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.076833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.076990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.077016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.077191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.077219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.414 qpair failed and we were unable to recover it.
00:33:06.414 [2024-07-25 23:39:04.077410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.414 [2024-07-25 23:39:04.077438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.077582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.077610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.077787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.077812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.077917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.077947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.078134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.078161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.078284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.078312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.078435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.078463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.078577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.078605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.078770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.078800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.078956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.078983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.079127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.079154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.079306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.079335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.079505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.079549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.079707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.079760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.079893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.079920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.080078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.080104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.080261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.080286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.080420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.080449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.080613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.080657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.080801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.080829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.080958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.080986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.081144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.081170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.081299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.081342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.081465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.081509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.081667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.081711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.081841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.081867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.081965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.081991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.082120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.082147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.082282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.082308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.082454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.082480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.082640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.082679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.082822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.082850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.082960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.082987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.083151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.083197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.083350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.083403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.083556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.083599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.083735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.083771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.083903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.083928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.084113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.084141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.084279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.084321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.084509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.084552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.084694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.415 [2024-07-25 23:39:04.084720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.415 qpair failed and we were unable to recover it.
00:33:06.415 [2024-07-25 23:39:04.084826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.084851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.085011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.085041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.085176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.085220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.085341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.085386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.085537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.085581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.085711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.085738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.085878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.085905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.086040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.086072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.086217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.086242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.086441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.086482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.086629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.086657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.086805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.086833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.086964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.086990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.087120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.087146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.087271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.087301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.087451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.087479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.087602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.087630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.087737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.087765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.087924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.087953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.088121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.088147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.088272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.088298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.088440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.088482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.088626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.088654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.088797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.088826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.088970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.088996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.089107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.089133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.089272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.089297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.089435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.089460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.089649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.089677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.089802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.089830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.089986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.090011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.090112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.090138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.090277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.090302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.090441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.090466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.090586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.090612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.090735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.416 [2024-07-25 23:39:04.090763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.416 qpair failed and we were unable to recover it.
00:33:06.416 [2024-07-25 23:39:04.090916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.417 [2024-07-25 23:39:04.090944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.417 qpair failed and we were unable to recover it.
00:33:06.417 [2024-07-25 23:39:04.091078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.417 [2024-07-25 23:39:04.091104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.417 qpair failed and we were unable to recover it.
00:33:06.417 [2024-07-25 23:39:04.091238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.417 [2024-07-25 23:39:04.091264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.417 qpair failed and we were unable to recover it.
00:33:06.417 [2024-07-25 23:39:04.091402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.417 [2024-07-25 23:39:04.091428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.417 qpair failed and we were unable to recover it.
00:33:06.417 [2024-07-25 23:39:04.091534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.417 [2024-07-25 23:39:04.091576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.417 qpair failed and we were unable to recover it.
00:33:06.417 [2024-07-25 23:39:04.091716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.417 [2024-07-25 23:39:04.091744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.417 qpair failed and we were unable to recover it. 00:33:06.417 [2024-07-25 23:39:04.091880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.417 [2024-07-25 23:39:04.091912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.417 qpair failed and we were unable to recover it. 00:33:06.417 [2024-07-25 23:39:04.092019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.417 [2024-07-25 23:39:04.092046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.417 qpair failed and we were unable to recover it. 00:33:06.417 [2024-07-25 23:39:04.092194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.417 [2024-07-25 23:39:04.092235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.417 qpair failed and we were unable to recover it. 00:33:06.417 [2024-07-25 23:39:04.092380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.417 [2024-07-25 23:39:04.092411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.417 qpair failed and we were unable to recover it. 00:33:06.417 [2024-07-25 23:39:04.092584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.417 [2024-07-25 23:39:04.092613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.417 qpair failed and we were unable to recover it. 00:33:06.417 [2024-07-25 23:39:04.092738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.417 [2024-07-25 23:39:04.092769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.417 qpair failed and we were unable to recover it. 00:33:06.417 [2024-07-25 23:39:04.092964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.417 [2024-07-25 23:39:04.093002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.417 qpair failed and we were unable to recover it. 00:33:06.417 [2024-07-25 23:39:04.093149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.417 [2024-07-25 23:39:04.093178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.417 qpair failed and we were unable to recover it. 00:33:06.417 [2024-07-25 23:39:04.093309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.417 [2024-07-25 23:39:04.093336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.417 qpair failed and we were unable to recover it. 
00:33:06.417 [2024-07-25 23:39:04.093505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.417 [2024-07-25 23:39:04.093531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.417 qpair failed and we were unable to recover it. 00:33:06.417 [2024-07-25 23:39:04.093635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.417 [2024-07-25 23:39:04.093661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.417 qpair failed and we were unable to recover it. 00:33:06.417 [2024-07-25 23:39:04.093773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.417 [2024-07-25 23:39:04.093799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.417 qpair failed and we were unable to recover it. 00:33:06.699 [2024-07-25 23:39:04.093957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.093983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.094117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.094142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.094275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.094303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.094488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.094536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.094677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.094705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.094850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.094879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.095020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.095049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 
00:33:06.700 [2024-07-25 23:39:04.095192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.095218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.095376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.095404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.095510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.095538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.095697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.095726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.095875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.095903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.096025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.096053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.096185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.096211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.096339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.096364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.096482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.096536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.096650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.096679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 
00:33:06.700 [2024-07-25 23:39:04.096851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.096880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.097001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.097033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.097191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.097232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.097369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.097401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.097554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.097599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.097750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.097794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.097905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.097931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.098067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.098094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.098198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.098224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.098344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.098372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 
00:33:06.700 [2024-07-25 23:39:04.098585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.098613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.098740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.098782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.098904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.098932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.099122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.099148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.700 [2024-07-25 23:39:04.099262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.700 [2024-07-25 23:39:04.099288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.700 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.099404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.099430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.099585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.099614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.099788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.099816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.099943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.099968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.100085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.100111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 
00:33:06.701 [2024-07-25 23:39:04.100246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.100272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.100404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.100429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.100548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.100578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.100696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.100724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.100851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.100879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.101012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.101040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.101180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.101218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.101335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.101363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.101520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.101549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.101690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.101722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 
00:33:06.701 [2024-07-25 23:39:04.101840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.101869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.102015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.102041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.102181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.102207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.102342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.102368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.102522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.102550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.102665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.102693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.102808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.102836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.102964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.102989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.103162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.103188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.103321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.103346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 
00:33:06.701 [2024-07-25 23:39:04.103499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.103528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.103687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.103726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.103848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.103876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.104026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.104051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.104193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.104219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.104311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.104337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.701 qpair failed and we were unable to recover it. 00:33:06.701 [2024-07-25 23:39:04.104469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.701 [2024-07-25 23:39:04.104494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.104625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.104651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.104779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.104804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.104937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.104993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 
00:33:06.702 [2024-07-25 23:39:04.105131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.105159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.105305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.105331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.105481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.105514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.105678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.105709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.105855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.105884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.105996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.106025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.106191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.106221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.106360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.106387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.106519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.106565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.106670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.106696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 
00:33:06.702 [2024-07-25 23:39:04.106799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.106825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.106958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.106984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.107120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.107147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.107259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.107285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.107417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.107443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.107579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.107605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.107746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.107772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.107903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.107928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.108036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.108070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.108184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.108211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 
00:33:06.702 [2024-07-25 23:39:04.108363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.108406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.108590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.108619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.108767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.108792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.108899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.108926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.109048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.109083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.109272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.109316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.109445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.109488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.702 qpair failed and we were unable to recover it. 00:33:06.702 [2024-07-25 23:39:04.109619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.702 [2024-07-25 23:39:04.109650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.703 qpair failed and we were unable to recover it. 00:33:06.703 [2024-07-25 23:39:04.109807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.703 [2024-07-25 23:39:04.109832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.703 qpair failed and we were unable to recover it. 00:33:06.703 [2024-07-25 23:39:04.109963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.703 [2024-07-25 23:39:04.109989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.703 qpair failed and we were unable to recover it. 
00:33:06.703 [2024-07-25 23:39:04.110141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.703 [2024-07-25 23:39:04.110171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.703 qpair failed and we were unable to recover it. 00:33:06.703 [2024-07-25 23:39:04.110284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.703 [2024-07-25 23:39:04.110312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.703 qpair failed and we were unable to recover it. 00:33:06.703 [2024-07-25 23:39:04.110460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.703 [2024-07-25 23:39:04.110488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.703 qpair failed and we were unable to recover it. 00:33:06.703 [2024-07-25 23:39:04.110634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.703 [2024-07-25 23:39:04.110662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.703 qpair failed and we were unable to recover it. 00:33:06.703 [2024-07-25 23:39:04.110807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.703 [2024-07-25 23:39:04.110835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.703 qpair failed and we were unable to recover it. 00:33:06.703 [2024-07-25 23:39:04.111007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.703 [2024-07-25 23:39:04.111035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.703 qpair failed and we were unable to recover it. 00:33:06.703 [2024-07-25 23:39:04.111173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.703 [2024-07-25 23:39:04.111198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.703 qpair failed and we were unable to recover it. 00:33:06.703 [2024-07-25 23:39:04.111372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.703 [2024-07-25 23:39:04.111401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.703 qpair failed and we were unable to recover it. 00:33:06.703 [2024-07-25 23:39:04.111515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.703 [2024-07-25 23:39:04.111544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.703 qpair failed and we were unable to recover it. 00:33:06.703 [2024-07-25 23:39:04.111671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.703 [2024-07-25 23:39:04.111713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.703 qpair failed and we were unable to recover it. 
00:33:06.703 [2024-07-25 23:39:04.111857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.703 [2024-07-25 23:39:04.111885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.703 qpair failed and we were unable to recover it. 00:33:06.703 [2024-07-25 23:39:04.112075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.703 [2024-07-25 23:39:04.112133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.703 qpair failed and we were unable to recover it. 00:33:06.703 [2024-07-25 23:39:04.112262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.703 [2024-07-25 23:39:04.112301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.703 qpair failed and we were unable to recover it. 00:33:06.703 [2024-07-25 23:39:04.112451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.703 [2024-07-25 23:39:04.112496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.703 qpair failed and we were unable to recover it. 00:33:06.703 [2024-07-25 23:39:04.112622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.703 [2024-07-25 23:39:04.112667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.703 qpair failed and we were unable to recover it. 00:33:06.703 [2024-07-25 23:39:04.112791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.703 [2024-07-25 23:39:04.112836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.703 qpair failed and we were unable to recover it. 00:33:06.703 [2024-07-25 23:39:04.112967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.703 [2024-07-25 23:39:04.112993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.703 qpair failed and we were unable to recover it. 00:33:06.703 [2024-07-25 23:39:04.113121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.703 [2024-07-25 23:39:04.113151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.703 qpair failed and we were unable to recover it. 00:33:06.703 [2024-07-25 23:39:04.113324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.703 [2024-07-25 23:39:04.113367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.703 qpair failed and we were unable to recover it. 00:33:06.703 [2024-07-25 23:39:04.113482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.703 [2024-07-25 23:39:04.113526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.703 qpair failed and we were unable to recover it. 
00:33:06.703 [2024-07-25 23:39:04.113662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.703 [2024-07-25 23:39:04.113704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.703 qpair failed and we were unable to recover it.
00:33:06.703 [2024-07-25 23:39:04.114141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.703 [2024-07-25 23:39:04.114168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.703 qpair failed and we were unable to recover it.
00:33:06.704 [2024-07-25 23:39:04.119152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.704 [2024-07-25 23:39:04.119191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.704 qpair failed and we were unable to recover it.
[... the same three-record pattern (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error; qpair failed and we were unable to recover it) repeats continuously from 23:39:04.113662 through 23:39:04.150075, cycling over tqpairs 0x7fdc14000b90, 0x7fdc0c000b90, and 0xfa44b0, always against addr=10.0.0.2, port=4420; duplicate records elided ...]
00:33:06.710 [2024-07-25 23:39:04.150230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.710 [2024-07-25 23:39:04.150258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.710 qpair failed and we were unable to recover it. 00:33:06.710 [2024-07-25 23:39:04.150435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.710 [2024-07-25 23:39:04.150465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.710 qpair failed and we were unable to recover it. 00:33:06.710 [2024-07-25 23:39:04.150612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.710 [2024-07-25 23:39:04.150642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.710 qpair failed and we were unable to recover it. 00:33:06.710 [2024-07-25 23:39:04.150819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.710 [2024-07-25 23:39:04.150848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.710 qpair failed and we were unable to recover it. 00:33:06.710 [2024-07-25 23:39:04.150972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.710 [2024-07-25 23:39:04.150997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.710 qpair failed and we were unable to recover it. 00:33:06.710 [2024-07-25 23:39:04.151140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.710 [2024-07-25 23:39:04.151170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.710 qpair failed and we were unable to recover it. 00:33:06.710 [2024-07-25 23:39:04.151276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.710 [2024-07-25 23:39:04.151302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.710 qpair failed and we were unable to recover it. 00:33:06.710 [2024-07-25 23:39:04.151499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.710 [2024-07-25 23:39:04.151528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.710 qpair failed and we were unable to recover it. 00:33:06.710 [2024-07-25 23:39:04.151667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.710 [2024-07-25 23:39:04.151709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.710 qpair failed and we were unable to recover it. 00:33:06.710 [2024-07-25 23:39:04.151853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.710 [2024-07-25 23:39:04.151881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.710 qpair failed and we were unable to recover it. 
00:33:06.710 [2024-07-25 23:39:04.152011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.710 [2024-07-25 23:39:04.152037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.710 qpair failed and we were unable to recover it. 00:33:06.710 [2024-07-25 23:39:04.152152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.710 [2024-07-25 23:39:04.152178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.710 qpair failed and we were unable to recover it. 00:33:06.710 [2024-07-25 23:39:04.152338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.710 [2024-07-25 23:39:04.152401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.710 qpair failed and we were unable to recover it. 00:33:06.710 [2024-07-25 23:39:04.152549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.152597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 00:33:06.711 [2024-07-25 23:39:04.152745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.152773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 00:33:06.711 [2024-07-25 23:39:04.152909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.152937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 00:33:06.711 [2024-07-25 23:39:04.153144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.153184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 00:33:06.711 [2024-07-25 23:39:04.153333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.153361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 00:33:06.711 [2024-07-25 23:39:04.153543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.153588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 00:33:06.711 [2024-07-25 23:39:04.153715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.153759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 
00:33:06.711 [2024-07-25 23:39:04.153869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.153896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 00:33:06.711 [2024-07-25 23:39:04.154055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.154091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 00:33:06.711 [2024-07-25 23:39:04.154263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.154291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 00:33:06.711 [2024-07-25 23:39:04.154413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.154439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 00:33:06.711 [2024-07-25 23:39:04.154577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.154602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 00:33:06.711 [2024-07-25 23:39:04.154765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.154801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 00:33:06.711 [2024-07-25 23:39:04.154958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.154983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 00:33:06.711 [2024-07-25 23:39:04.155087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.155113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 00:33:06.711 [2024-07-25 23:39:04.155246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.155271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 00:33:06.711 [2024-07-25 23:39:04.155426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.155454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 
00:33:06.711 [2024-07-25 23:39:04.155608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.155637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 00:33:06.711 [2024-07-25 23:39:04.155767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.155808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 00:33:06.711 [2024-07-25 23:39:04.155953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.155981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 00:33:06.711 [2024-07-25 23:39:04.156139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.156165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 00:33:06.711 [2024-07-25 23:39:04.156269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.156294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 00:33:06.711 [2024-07-25 23:39:04.156424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.156452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 00:33:06.711 [2024-07-25 23:39:04.156620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.156648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 00:33:06.711 [2024-07-25 23:39:04.156758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.156786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 00:33:06.711 [2024-07-25 23:39:04.156955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.156983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 00:33:06.711 [2024-07-25 23:39:04.157168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.711 [2024-07-25 23:39:04.157194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.711 qpair failed and we were unable to recover it. 
00:33:06.712 [2024-07-25 23:39:04.157338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.157376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.157503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.157533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.157706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.157737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.157897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.157923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.158033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.158066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.158221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.158249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.158399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.158428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.158573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.158602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.158742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.158773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.158927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.158954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 
00:33:06.712 [2024-07-25 23:39:04.159067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.159094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.159253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.159278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.159403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.159435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.159583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.159611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.159729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.159757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.159904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.159949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.160087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.160114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.160305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.160350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.160506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.160548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.160698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.160741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 
00:33:06.712 [2024-07-25 23:39:04.160876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.160901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.161033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.161067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.161209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.161235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.161397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.161425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.161637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.161690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.161835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.161863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.162038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.162076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.162257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.162286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.162432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.162460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.162579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.162607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 
00:33:06.712 [2024-07-25 23:39:04.162727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.712 [2024-07-25 23:39:04.162756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.712 qpair failed and we were unable to recover it. 00:33:06.712 [2024-07-25 23:39:04.162902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.162930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.163052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.163120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.163255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.163280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.163432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.163462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.163606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.163634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.163764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.163805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.163978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.164006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.164138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.164164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.164293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.164327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 
00:33:06.713 [2024-07-25 23:39:04.164437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.164466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.164583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.164611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.164779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.164808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.164914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.164943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.165082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.165137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.165277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.165304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.165430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.165458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.165604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.165632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.165798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.165853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.165969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.165996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 
00:33:06.713 [2024-07-25 23:39:04.166114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.166140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.166238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.166264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.166389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.166417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.166584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.166613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.166832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.166860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.167003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.167031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.167190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.167216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.167329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.167356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.167476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.167504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.167630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.167671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 
00:33:06.713 [2024-07-25 23:39:04.167785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.167814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.167960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.167988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.713 qpair failed and we were unable to recover it. 00:33:06.713 [2024-07-25 23:39:04.168114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.713 [2024-07-25 23:39:04.168141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 00:33:06.714 [2024-07-25 23:39:04.168249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.168274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 00:33:06.714 [2024-07-25 23:39:04.168397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.168425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 00:33:06.714 [2024-07-25 23:39:04.168589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.168618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 00:33:06.714 [2024-07-25 23:39:04.168760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.168792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 00:33:06.714 [2024-07-25 23:39:04.168926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.168951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 00:33:06.714 [2024-07-25 23:39:04.169053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.169084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 00:33:06.714 [2024-07-25 23:39:04.169211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.169236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 
00:33:06.714 [2024-07-25 23:39:04.169346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.169387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 00:33:06.714 [2024-07-25 23:39:04.169533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.169562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 00:33:06.714 [2024-07-25 23:39:04.169729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.169757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 00:33:06.714 [2024-07-25 23:39:04.169878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.169906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 00:33:06.714 [2024-07-25 23:39:04.170057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.170087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 00:33:06.714 [2024-07-25 23:39:04.170186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.170212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 00:33:06.714 [2024-07-25 23:39:04.170358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.170386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 00:33:06.714 [2024-07-25 23:39:04.170534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.170563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 00:33:06.714 [2024-07-25 23:39:04.170705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.170732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 00:33:06.714 [2024-07-25 23:39:04.170882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.170910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 
00:33:06.714 [2024-07-25 23:39:04.171050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.171109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 00:33:06.714 [2024-07-25 23:39:04.171253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.171280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 00:33:06.714 [2024-07-25 23:39:04.171418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.171444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 00:33:06.714 [2024-07-25 23:39:04.171571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.171613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 00:33:06.714 [2024-07-25 23:39:04.171747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.171800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 00:33:06.714 [2024-07-25 23:39:04.171937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.171962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 00:33:06.714 [2024-07-25 23:39:04.172133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.172160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 00:33:06.714 [2024-07-25 23:39:04.172271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.172296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 00:33:06.714 [2024-07-25 23:39:04.172428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.172454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 00:33:06.714 [2024-07-25 23:39:04.172613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.714 [2024-07-25 23:39:04.172638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.714 qpair failed and we were unable to recover it. 
00:33:06.714 [2024-07-25 23:39:04.172774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.714 [2024-07-25 23:39:04.172799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.714 qpair failed and we were unable to recover it.
00:33:06.715 [2024-07-25 23:39:04.174866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.715 [2024-07-25 23:39:04.174909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.715 qpair failed and we were unable to recover it.
00:33:06.715 [2024-07-25 23:39:04.178015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.715 [2024-07-25 23:39:04.178054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:06.715 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously through 2024-07-25 23:39:04.210852, cycling among tqpair handles 0xfa44b0, 0x7fdc0c000b90, and 0x7fdc14000b90; every connect() to 10.0.0.2, port 4420 fails with errno = 111 and no qpair recovers ...]
00:33:06.721 [2024-07-25 23:39:04.210993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.721 [2024-07-25 23:39:04.211019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.721 qpair failed and we were unable to recover it. 00:33:06.721 [2024-07-25 23:39:04.211161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.721 [2024-07-25 23:39:04.211188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.721 qpair failed and we were unable to recover it. 00:33:06.721 [2024-07-25 23:39:04.211328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.721 [2024-07-25 23:39:04.211354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.721 qpair failed and we were unable to recover it. 00:33:06.721 [2024-07-25 23:39:04.211460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.721 [2024-07-25 23:39:04.211501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.721 qpair failed and we were unable to recover it. 00:33:06.721 [2024-07-25 23:39:04.211618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.721 [2024-07-25 23:39:04.211647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.721 qpair failed and we were unable to recover it. 00:33:06.721 [2024-07-25 23:39:04.211801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.721 [2024-07-25 23:39:04.211826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.721 qpair failed and we were unable to recover it. 00:33:06.721 [2024-07-25 23:39:04.211935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.721 [2024-07-25 23:39:04.211960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.721 qpair failed and we were unable to recover it. 00:33:06.721 [2024-07-25 23:39:04.212125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.721 [2024-07-25 23:39:04.212154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.721 qpair failed and we were unable to recover it. 00:33:06.721 [2024-07-25 23:39:04.212313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.212343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 00:33:06.722 [2024-07-25 23:39:04.212484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.212509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 
00:33:06.722 [2024-07-25 23:39:04.212665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.212694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 00:33:06.722 [2024-07-25 23:39:04.212844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.212869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 00:33:06.722 [2024-07-25 23:39:04.212981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.213006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 00:33:06.722 [2024-07-25 23:39:04.213109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.213136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 00:33:06.722 [2024-07-25 23:39:04.213266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.213291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 00:33:06.722 [2024-07-25 23:39:04.213469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.213497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 00:33:06.722 [2024-07-25 23:39:04.213647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.213676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 00:33:06.722 [2024-07-25 23:39:04.213830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.213855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 00:33:06.722 [2024-07-25 23:39:04.214039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.214079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 00:33:06.722 [2024-07-25 23:39:04.214252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.214278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 
00:33:06.722 [2024-07-25 23:39:04.214437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.214463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 00:33:06.722 [2024-07-25 23:39:04.214636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.214664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 00:33:06.722 [2024-07-25 23:39:04.214809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.214850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 00:33:06.722 [2024-07-25 23:39:04.215011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.215037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 00:33:06.722 [2024-07-25 23:39:04.215161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.215188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 00:33:06.722 [2024-07-25 23:39:04.215348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.215373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 00:33:06.722 [2024-07-25 23:39:04.215545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.215570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 00:33:06.722 [2024-07-25 23:39:04.215701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.215744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 00:33:06.722 [2024-07-25 23:39:04.215922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.215950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 00:33:06.722 [2024-07-25 23:39:04.216105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.216130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 
00:33:06.722 [2024-07-25 23:39:04.216293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.216337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 00:33:06.722 [2024-07-25 23:39:04.216487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.216513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 00:33:06.722 [2024-07-25 23:39:04.216683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.216709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 00:33:06.722 [2024-07-25 23:39:04.216858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.216886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 00:33:06.722 [2024-07-25 23:39:04.217033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.722 [2024-07-25 23:39:04.217071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.722 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.217229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.217257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.217368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.217410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.217557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.217586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.217762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.217787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.217921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.217966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 
00:33:06.723 [2024-07-25 23:39:04.218088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.218118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.218250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.218276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.218387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.218414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.218599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.218627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.218777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.218802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.218963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.219006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.219170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.219196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.219352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.219377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.219531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.219563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.219680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.219710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 
00:33:06.723 [2024-07-25 23:39:04.219865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.219890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.220030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.220056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.220225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.220271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.220430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.220456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.220616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.220658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.220777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.220806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.220963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.220988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.221125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.221167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.221287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.221316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.221454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.221480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 
00:33:06.723 [2024-07-25 23:39:04.221638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.221664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.221805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.221833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.221960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.221986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.222121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.222147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.222310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.222339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.723 [2024-07-25 23:39:04.222524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.723 [2024-07-25 23:39:04.222550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.723 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.222689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.222715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.222855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.222882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.222988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.223014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.223200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.223229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 
00:33:06.724 [2024-07-25 23:39:04.223351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.223380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.223505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.223532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.223665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.223691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.223872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.223901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.224032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.224057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.224216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.224242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.224414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.224440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.224573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.224599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.224730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.224773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.224921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.224949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 
00:33:06.724 [2024-07-25 23:39:04.225099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.225126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.225233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.225259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.225423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.225452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.225606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.225633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.225764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.225806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.225961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.225990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.226175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.226202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.226352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.226381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.226498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.226530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.226717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.226743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 
00:33:06.724 [2024-07-25 23:39:04.226843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.226886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.227001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.227031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.227188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.227214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.227310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.227336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.227507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.227533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.227666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.227692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.227822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.724 [2024-07-25 23:39:04.227865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.724 qpair failed and we were unable to recover it. 00:33:06.724 [2024-07-25 23:39:04.228014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.228044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.228184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.228210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.228366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.228392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 
00:33:06.725 [2024-07-25 23:39:04.228550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.228579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.228731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.228758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.228874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.228900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.229084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.229110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.229264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.229290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.229421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.229446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.229613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.229639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.229767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.229793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.229928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.229969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.230126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.230153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 
00:33:06.725 [2024-07-25 23:39:04.230257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.230284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.230428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.230470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.230604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.230630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.230763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.230789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.230946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.230972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.231149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.231206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.231369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.231397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.231504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.231531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.231655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.231683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.231804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.231830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 
00:33:06.725 [2024-07-25 23:39:04.231991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.232016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.232188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.232218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.232371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.232397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.232522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.232548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.232686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.232715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.232866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.232892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.233028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.233054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.233207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.233233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.725 [2024-07-25 23:39:04.233366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.725 [2024-07-25 23:39:04.233397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.725 qpair failed and we were unable to recover it. 00:33:06.726 [2024-07-25 23:39:04.233543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.726 [2024-07-25 23:39:04.233572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.726 qpair failed and we were unable to recover it. 
00:33:06.726 [2024-07-25 23:39:04.233717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.726 [2024-07-25 23:39:04.233746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.726 qpair failed and we were unable to recover it.
00:33:06.732 [... the same three-record sequence (posix.c:1023:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats once per reconnect attempt, roughly 200 more times, from 2024-07-25 23:39:04.233894 through 2024-07-25 23:39:04.269838 ...]
00:33:06.732 [2024-07-25 23:39:04.270003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.732 [2024-07-25 23:39:04.270032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.732 qpair failed and we were unable to recover it. 00:33:06.732 [2024-07-25 23:39:04.270165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.732 [2024-07-25 23:39:04.270191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.732 qpair failed and we were unable to recover it. 00:33:06.732 [2024-07-25 23:39:04.270330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.732 [2024-07-25 23:39:04.270355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 00:33:06.733 [2024-07-25 23:39:04.270509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.270538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 00:33:06.733 [2024-07-25 23:39:04.270696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.270722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 00:33:06.733 [2024-07-25 23:39:04.270822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.270847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 00:33:06.733 [2024-07-25 23:39:04.271012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.271038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 00:33:06.733 [2024-07-25 23:39:04.271193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.271231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 00:33:06.733 [2024-07-25 23:39:04.271373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.271400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 00:33:06.733 [2024-07-25 23:39:04.271555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.271599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 
00:33:06.733 [2024-07-25 23:39:04.271780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.271823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 00:33:06.733 [2024-07-25 23:39:04.271959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.271984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 00:33:06.733 [2024-07-25 23:39:04.272115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.272142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 00:33:06.733 [2024-07-25 23:39:04.272277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.272303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 00:33:06.733 [2024-07-25 23:39:04.272480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.272523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 00:33:06.733 [2024-07-25 23:39:04.272681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.272729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 00:33:06.733 [2024-07-25 23:39:04.272865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.272892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 00:33:06.733 [2024-07-25 23:39:04.273051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.273092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 00:33:06.733 [2024-07-25 23:39:04.273230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.273256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 00:33:06.733 [2024-07-25 23:39:04.273388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.273417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 
00:33:06.733 [2024-07-25 23:39:04.273556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.273585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 00:33:06.733 [2024-07-25 23:39:04.273731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.273760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 00:33:06.733 [2024-07-25 23:39:04.273921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.273947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 00:33:06.733 [2024-07-25 23:39:04.274080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.274107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 00:33:06.733 [2024-07-25 23:39:04.274234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.274262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 00:33:06.733 [2024-07-25 23:39:04.274402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.274431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 00:33:06.733 [2024-07-25 23:39:04.274610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.274639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 00:33:06.733 [2024-07-25 23:39:04.274766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.274794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 00:33:06.733 [2024-07-25 23:39:04.274915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.274941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 00:33:06.733 [2024-07-25 23:39:04.275106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.733 [2024-07-25 23:39:04.275132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.733 qpair failed and we were unable to recover it. 
00:33:06.734 [2024-07-25 23:39:04.275267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.734 [2024-07-25 23:39:04.275292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.734 qpair failed and we were unable to recover it. 00:33:06.734 [2024-07-25 23:39:04.275444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.734 [2024-07-25 23:39:04.275472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.734 qpair failed and we were unable to recover it. 00:33:06.734 [2024-07-25 23:39:04.275639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.734 [2024-07-25 23:39:04.275667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.734 qpair failed and we were unable to recover it. 00:33:06.734 [2024-07-25 23:39:04.275784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.734 [2024-07-25 23:39:04.275811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.734 qpair failed and we were unable to recover it. 00:33:06.734 [2024-07-25 23:39:04.275973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.734 [2024-07-25 23:39:04.275999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.734 qpair failed and we were unable to recover it. 00:33:06.734 [2024-07-25 23:39:04.276132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.734 [2024-07-25 23:39:04.276159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.734 qpair failed and we were unable to recover it. 00:33:06.734 [2024-07-25 23:39:04.276315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.734 [2024-07-25 23:39:04.276356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.734 qpair failed and we were unable to recover it. 00:33:06.734 [2024-07-25 23:39:04.276501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.734 [2024-07-25 23:39:04.276530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.734 qpair failed and we were unable to recover it. 00:33:06.734 [2024-07-25 23:39:04.276685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.734 [2024-07-25 23:39:04.276714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.734 qpair failed and we were unable to recover it. 00:33:06.734 [2024-07-25 23:39:04.276861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.734 [2024-07-25 23:39:04.276889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.734 qpair failed and we were unable to recover it. 
00:33:06.734 [2024-07-25 23:39:04.277043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.734 [2024-07-25 23:39:04.277075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.734 qpair failed and we were unable to recover it. 00:33:06.734 [2024-07-25 23:39:04.277248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.734 [2024-07-25 23:39:04.277273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.734 qpair failed and we were unable to recover it. 00:33:06.734 [2024-07-25 23:39:04.277437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.734 [2024-07-25 23:39:04.277466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.734 qpair failed and we were unable to recover it. 00:33:06.734 [2024-07-25 23:39:04.277598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.734 [2024-07-25 23:39:04.277641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.734 qpair failed and we were unable to recover it. 00:33:06.734 [2024-07-25 23:39:04.277785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.734 [2024-07-25 23:39:04.277813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.734 qpair failed and we were unable to recover it. 00:33:06.734 [2024-07-25 23:39:04.277920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.734 [2024-07-25 23:39:04.277946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.734 qpair failed and we were unable to recover it. 00:33:06.734 [2024-07-25 23:39:04.278100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.734 [2024-07-25 23:39:04.278124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.734 qpair failed and we were unable to recover it. 00:33:06.734 [2024-07-25 23:39:04.278258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.734 [2024-07-25 23:39:04.278284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.734 qpair failed and we were unable to recover it. 00:33:06.734 [2024-07-25 23:39:04.278425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.734 [2024-07-25 23:39:04.278452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.734 qpair failed and we were unable to recover it. 00:33:06.734 [2024-07-25 23:39:04.278628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.734 [2024-07-25 23:39:04.278657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.734 qpair failed and we were unable to recover it. 
00:33:06.734 [2024-07-25 23:39:04.278807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.734 [2024-07-25 23:39:04.278836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.734 qpair failed and we were unable to recover it. 00:33:06.734 [2024-07-25 23:39:04.278955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.734 [2024-07-25 23:39:04.278980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.734 qpair failed and we were unable to recover it. 00:33:06.734 [2024-07-25 23:39:04.279093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.734 [2024-07-25 23:39:04.279119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.734 qpair failed and we were unable to recover it. 00:33:06.734 [2024-07-25 23:39:04.279259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.734 [2024-07-25 23:39:04.279286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.734 qpair failed and we were unable to recover it. 00:33:06.734 [2024-07-25 23:39:04.279438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.734 [2024-07-25 23:39:04.279465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.734 qpair failed and we were unable to recover it. 00:33:06.734 [2024-07-25 23:39:04.279674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.734 [2024-07-25 23:39:04.279706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.734 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.279852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.279881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.280022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.280048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.280217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.280242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.280392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.280420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 
00:33:06.735 [2024-07-25 23:39:04.280569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.280599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.280771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.280800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.280972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.281000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.281168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.281194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.281298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.281324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.281456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.281481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.281634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.281663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.281825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.281854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.282002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.282030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.282204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.282230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 
00:33:06.735 [2024-07-25 23:39:04.282359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.282385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.282501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.282544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.282659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.282687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.282818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.282863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.283006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.283034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.283221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.283247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.283378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.283422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.283565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.283593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.283717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.283743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.283872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.283901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 
00:33:06.735 [2024-07-25 23:39:04.284030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.284055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.284226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.284252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.284383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.284413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.284556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.284584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.284734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.284762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.284909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.284938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.285099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.735 [2024-07-25 23:39:04.285126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.735 qpair failed and we were unable to recover it. 00:33:06.735 [2024-07-25 23:39:04.285230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.285256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.285388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.285414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.285518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.285544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 
00:33:06.736 [2024-07-25 23:39:04.285741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.285769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.285901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.285927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.286057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.286090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.286226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.286255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.286434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.286460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.286613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.286647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.286795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.286823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.286950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.286977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.287110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.287137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.287266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.287293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 
00:33:06.736 [2024-07-25 23:39:04.287427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.287453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.287562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.287588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.287737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.287766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.287917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.287944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.288048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.288083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.288254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.288297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.288449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.288475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.288614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.288640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.288785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.288829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.289021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.289048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 
00:33:06.736 [2024-07-25 23:39:04.289200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.289227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.289357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.289383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.289518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.289544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.289676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.289720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.289899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.289927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.290048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.290083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.290192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.290217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.290369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.290397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.736 [2024-07-25 23:39:04.290582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.736 [2024-07-25 23:39:04.290607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.736 qpair failed and we were unable to recover it. 00:33:06.737 [2024-07-25 23:39:04.290755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.737 [2024-07-25 23:39:04.290784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.737 qpair failed and we were unable to recover it. 
00:33:06.737 [2024-07-25 23:39:04.290961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.737 [2024-07-25 23:39:04.290989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.737 qpair failed and we were unable to recover it. 00:33:06.737 [2024-07-25 23:39:04.291135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.737 [2024-07-25 23:39:04.291161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.737 qpair failed and we were unable to recover it. 00:33:06.737 [2024-07-25 23:39:04.291315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.737 [2024-07-25 23:39:04.291343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.737 qpair failed and we were unable to recover it. 00:33:06.737 [2024-07-25 23:39:04.291469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.737 [2024-07-25 23:39:04.291498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.737 qpair failed and we were unable to recover it. 00:33:06.737 [2024-07-25 23:39:04.291677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.737 [2024-07-25 23:39:04.291703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.737 qpair failed and we were unable to recover it. 00:33:06.737 [2024-07-25 23:39:04.291803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.737 [2024-07-25 23:39:04.291828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.737 qpair failed and we were unable to recover it. 00:33:06.737 [2024-07-25 23:39:04.291989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.737 [2024-07-25 23:39:04.292017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.737 qpair failed and we were unable to recover it. 00:33:06.737 [2024-07-25 23:39:04.292179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.737 [2024-07-25 23:39:04.292205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.737 qpair failed and we were unable to recover it. 00:33:06.737 [2024-07-25 23:39:04.292332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.737 [2024-07-25 23:39:04.292373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.737 qpair failed and we were unable to recover it. 00:33:06.737 [2024-07-25 23:39:04.292539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.737 [2024-07-25 23:39:04.292565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.737 qpair failed and we were unable to recover it. 
00:33:06.737 [2024-07-25 23:39:04.292693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.737 [2024-07-25 23:39:04.292719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.737 qpair failed and we were unable to recover it.
[... the same three-message failure sequence (connect() failed, errno = 111; sock connection error; qpair failed and we were unable to recover it) repeats back-to-back from 23:39:04.292693 through 23:39:04.328641, switching in runs between tqpair=0x7fdc1c000b90 and tqpair=0x7fdc0c000b90, every attempt against addr=10.0.0.2, port=4420 ...]
00:33:06.744 [2024-07-25 23:39:04.328588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.744 [2024-07-25 23:39:04.328641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.744 qpair failed and we were unable to recover it.
00:33:06.744 [2024-07-25 23:39:04.328788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.328814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 00:33:06.744 [2024-07-25 23:39:04.328918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.328943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 00:33:06.744 [2024-07-25 23:39:04.329104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.329133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 00:33:06.744 [2024-07-25 23:39:04.329281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.329306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 00:33:06.744 [2024-07-25 23:39:04.329416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.329442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 00:33:06.744 [2024-07-25 23:39:04.329607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.329632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 00:33:06.744 [2024-07-25 23:39:04.329738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.329764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 00:33:06.744 [2024-07-25 23:39:04.329902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.329928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 00:33:06.744 [2024-07-25 23:39:04.330069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.330094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 00:33:06.744 [2024-07-25 23:39:04.330244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.330270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 
00:33:06.744 [2024-07-25 23:39:04.330430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.330455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 00:33:06.744 [2024-07-25 23:39:04.330620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.330645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 00:33:06.744 [2024-07-25 23:39:04.330801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.330826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 00:33:06.744 [2024-07-25 23:39:04.330934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.330960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 00:33:06.744 [2024-07-25 23:39:04.331100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.331126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 00:33:06.744 [2024-07-25 23:39:04.331269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.331294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 00:33:06.744 [2024-07-25 23:39:04.331404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.331429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 00:33:06.744 [2024-07-25 23:39:04.331558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.331584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 00:33:06.744 [2024-07-25 23:39:04.331746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.331770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 00:33:06.744 [2024-07-25 23:39:04.331924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.331953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 
00:33:06.744 [2024-07-25 23:39:04.332094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.332123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 00:33:06.744 [2024-07-25 23:39:04.332248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.332273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 00:33:06.744 [2024-07-25 23:39:04.332397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.332435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 00:33:06.744 [2024-07-25 23:39:04.332606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.332635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 00:33:06.744 [2024-07-25 23:39:04.332814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.332840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 00:33:06.744 [2024-07-25 23:39:04.333018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.333047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 00:33:06.744 [2024-07-25 23:39:04.333182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.744 [2024-07-25 23:39:04.333207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.744 qpair failed and we were unable to recover it. 00:33:06.745 [2024-07-25 23:39:04.333350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.333375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 00:33:06.745 [2024-07-25 23:39:04.333486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.333530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 00:33:06.745 [2024-07-25 23:39:04.333704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.333732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 
00:33:06.745 [2024-07-25 23:39:04.333889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.333914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 00:33:06.745 [2024-07-25 23:39:04.334042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.334090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 00:33:06.745 [2024-07-25 23:39:04.334250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.334276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 00:33:06.745 [2024-07-25 23:39:04.334415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.334440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 00:33:06.745 [2024-07-25 23:39:04.334545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.334571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 00:33:06.745 [2024-07-25 23:39:04.334705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.334731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 00:33:06.745 [2024-07-25 23:39:04.334914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.334939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 00:33:06.745 [2024-07-25 23:39:04.335096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.335121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 00:33:06.745 [2024-07-25 23:39:04.335262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.335288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 00:33:06.745 [2024-07-25 23:39:04.335418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.335443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 
00:33:06.745 [2024-07-25 23:39:04.335637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.335691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 00:33:06.745 [2024-07-25 23:39:04.335841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.335870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 00:33:06.745 [2024-07-25 23:39:04.335990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.336016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 00:33:06.745 [2024-07-25 23:39:04.336159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.336184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 00:33:06.745 [2024-07-25 23:39:04.336320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.336362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 00:33:06.745 [2024-07-25 23:39:04.336512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.336538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 00:33:06.745 [2024-07-25 23:39:04.336651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.336677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 00:33:06.745 [2024-07-25 23:39:04.336788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.336814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 00:33:06.745 [2024-07-25 23:39:04.336970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.336995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 00:33:06.745 [2024-07-25 23:39:04.337178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.337207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 
00:33:06.745 [2024-07-25 23:39:04.337359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.337388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 00:33:06.745 [2024-07-25 23:39:04.337518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.337544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 00:33:06.745 [2024-07-25 23:39:04.337671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.337697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 00:33:06.745 [2024-07-25 23:39:04.337830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-07-25 23:39:04.337856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.745 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.338047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.338080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.338185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.338229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.338392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.338418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.338519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.338545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.338676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.338702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.338826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.338855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 
00:33:06.746 [2024-07-25 23:39:04.339012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.339038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.339167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.339223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.339381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.339428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.339604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.339631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.339788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.339847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.339993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.340021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.340172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.340198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.340341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.340367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.340492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.340519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.340646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.340671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 
00:33:06.746 [2024-07-25 23:39:04.340828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.340854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.340996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.341023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.341188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.341214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.341343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.341387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.341531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.341559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.341711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.341737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.341842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.341868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.342052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.342084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.342215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.342241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.342369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.342412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 
00:33:06.746 [2024-07-25 23:39:04.342586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.342613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.342766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.342792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.342924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.342950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.343113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.343141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.343293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-07-25 23:39:04.343319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.746 qpair failed and we were unable to recover it. 00:33:06.746 [2024-07-25 23:39:04.343458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.343503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.343653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.343681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.343833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.343858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.343967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.343992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.344214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.344253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 
00:33:06.747 [2024-07-25 23:39:04.344396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.344423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.344526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.344552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.344692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.344717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.344874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.344900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.345044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.345079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.345182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.345208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.345318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.345344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.345482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.345509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.345638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.345726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.345852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.345877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 
00:33:06.747 [2024-07-25 23:39:04.346013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.346038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.346156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.346183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.346282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.346307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.346424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.346449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.346583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.346608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.346737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.346762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.346897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.346940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.347081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.347147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.347292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.347319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.347446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.347472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 
00:33:06.747 [2024-07-25 23:39:04.347598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.347626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.347784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.347810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.347946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.347990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.348145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.348171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.348276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.348301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.348401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.348427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.348533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-07-25 23:39:04.348560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.747 qpair failed and we were unable to recover it. 00:33:06.747 [2024-07-25 23:39:04.348718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.748 [2024-07-25 23:39:04.348744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.748 qpair failed and we were unable to recover it. 00:33:06.748 [2024-07-25 23:39:04.348891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.748 [2024-07-25 23:39:04.348919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.748 qpair failed and we were unable to recover it. 00:33:06.748 [2024-07-25 23:39:04.349069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.748 [2024-07-25 23:39:04.349097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.748 qpair failed and we were unable to recover it. 
00:33:06.748 [2024-07-25 23:39:04.349257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.748 [2024-07-25 23:39:04.349282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.748 qpair failed and we were unable to recover it. 00:33:06.748 [2024-07-25 23:39:04.349442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.748 [2024-07-25 23:39:04.349506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.748 qpair failed and we were unable to recover it. 00:33:06.748 [2024-07-25 23:39:04.349644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.748 [2024-07-25 23:39:04.349672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.748 qpair failed and we were unable to recover it. 00:33:06.748 [2024-07-25 23:39:04.349824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.748 [2024-07-25 23:39:04.349850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.748 qpair failed and we were unable to recover it. 00:33:06.748 [2024-07-25 23:39:04.349999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.748 [2024-07-25 23:39:04.350027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:06.748 qpair failed and we were unable to recover it. 00:33:06.748 [2024-07-25 23:39:04.350203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.748 [2024-07-25 23:39:04.350242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.748 qpair failed and we were unable to recover it. 00:33:06.748 [2024-07-25 23:39:04.350356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.748 [2024-07-25 23:39:04.350383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.748 qpair failed and we were unable to recover it. 00:33:06.748 [2024-07-25 23:39:04.350486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.748 [2024-07-25 23:39:04.350512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.748 qpair failed and we were unable to recover it. 00:33:06.748 [2024-07-25 23:39:04.350642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.748 [2024-07-25 23:39:04.350667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.748 qpair failed and we were unable to recover it. 00:33:06.748 [2024-07-25 23:39:04.350768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.748 [2024-07-25 23:39:04.350798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.748 qpair failed and we were unable to recover it. 
00:33:06.748 [2024-07-25 23:39:04.350897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.748 [2024-07-25 23:39:04.350922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:06.748 qpair failed and we were unable to recover it.
00:33:06.748 [2024-07-25 23:39:04.351089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.748 [2024-07-25 23:39:04.351116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:06.748 qpair failed and we were unable to recover it.
00:33:06.748 [2024-07-25 23:39:04.352003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:06.748 [2024-07-25 23:39:04.352042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:06.748 qpair failed and we were unable to recover it.
00:33:06.755 [repetitive entries condensed: the same two-message failure pair, each followed by "qpair failed and we were unable to recover it.", repeats without interruption from 23:39:04.351277 through 23:39:04.386494, cycling among tqpair=0xfa44b0, tqpair=0x7fdc1c000b90, and tqpair=0x7fdc0c000b90, always against addr=10.0.0.2, port=4420]
00:33:06.755 [2024-07-25 23:39:04.386673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.755 [2024-07-25 23:39:04.386702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.755 qpair failed and we were unable to recover it. 00:33:06.755 [2024-07-25 23:39:04.386871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.755 [2024-07-25 23:39:04.386900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.755 qpair failed and we were unable to recover it. 00:33:06.755 [2024-07-25 23:39:04.387020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.755 [2024-07-25 23:39:04.387046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.755 qpair failed and we were unable to recover it. 00:33:06.755 [2024-07-25 23:39:04.387209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.755 [2024-07-25 23:39:04.387248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.755 qpair failed and we were unable to recover it. 00:33:06.755 [2024-07-25 23:39:04.387386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.755 [2024-07-25 23:39:04.387428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.755 qpair failed and we were unable to recover it. 00:33:06.755 [2024-07-25 23:39:04.387582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.755 [2024-07-25 23:39:04.387608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.755 qpair failed and we were unable to recover it. 00:33:06.755 [2024-07-25 23:39:04.387714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.755 [2024-07-25 23:39:04.387740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.755 qpair failed and we were unable to recover it. 00:33:06.755 [2024-07-25 23:39:04.387936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.755 [2024-07-25 23:39:04.387989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.755 qpair failed and we were unable to recover it. 00:33:06.755 [2024-07-25 23:39:04.388184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.755 [2024-07-25 23:39:04.388210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.755 qpair failed and we were unable to recover it. 00:33:06.755 [2024-07-25 23:39:04.388321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.755 [2024-07-25 23:39:04.388347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.755 qpair failed and we were unable to recover it. 
00:33:06.755 [2024-07-25 23:39:04.388513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.755 [2024-07-25 23:39:04.388539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.755 qpair failed and we were unable to recover it. 00:33:06.755 [2024-07-25 23:39:04.388684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.755 [2024-07-25 23:39:04.388709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.755 qpair failed and we were unable to recover it. 00:33:06.755 [2024-07-25 23:39:04.388814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.755 [2024-07-25 23:39:04.388839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.755 qpair failed and we were unable to recover it. 00:33:06.755 [2024-07-25 23:39:04.388970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.755 [2024-07-25 23:39:04.388996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.755 qpair failed and we were unable to recover it. 00:33:06.755 [2024-07-25 23:39:04.389132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.755 [2024-07-25 23:39:04.389158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.755 qpair failed and we were unable to recover it. 00:33:06.755 [2024-07-25 23:39:04.389287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.755 [2024-07-25 23:39:04.389331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.755 qpair failed and we were unable to recover it. 00:33:06.755 [2024-07-25 23:39:04.389482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.755 [2024-07-25 23:39:04.389510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.755 qpair failed and we were unable to recover it. 00:33:06.755 [2024-07-25 23:39:04.389667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.755 [2024-07-25 23:39:04.389692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.755 qpair failed and we were unable to recover it. 00:33:06.755 [2024-07-25 23:39:04.389793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.755 [2024-07-25 23:39:04.389818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.755 qpair failed and we were unable to recover it. 00:33:06.755 [2024-07-25 23:39:04.389937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.755 [2024-07-25 23:39:04.389966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.755 qpair failed and we were unable to recover it. 
00:33:06.755 [2024-07-25 23:39:04.390108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.755 [2024-07-25 23:39:04.390133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.755 qpair failed and we were unable to recover it. 00:33:06.755 [2024-07-25 23:39:04.390232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.755 [2024-07-25 23:39:04.390258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.755 qpair failed and we were unable to recover it. 00:33:06.755 [2024-07-25 23:39:04.390439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.755 [2024-07-25 23:39:04.390466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.755 qpair failed and we were unable to recover it. 00:33:06.755 [2024-07-25 23:39:04.390621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.755 [2024-07-25 23:39:04.390647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.755 qpair failed and we were unable to recover it. 00:33:06.755 [2024-07-25 23:39:04.390754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.756 [2024-07-25 23:39:04.390780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.756 qpair failed and we were unable to recover it. 00:33:06.756 [2024-07-25 23:39:04.390882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.756 [2024-07-25 23:39:04.390908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.756 qpair failed and we were unable to recover it. 00:33:06.756 [2024-07-25 23:39:04.391070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.756 [2024-07-25 23:39:04.391096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.756 qpair failed and we were unable to recover it. 00:33:06.756 [2024-07-25 23:39:04.391197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.756 [2024-07-25 23:39:04.391223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.756 qpair failed and we were unable to recover it. 00:33:06.756 [2024-07-25 23:39:04.391324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.756 [2024-07-25 23:39:04.391349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.756 qpair failed and we were unable to recover it. 00:33:06.756 [2024-07-25 23:39:04.391457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.756 [2024-07-25 23:39:04.391482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.756 qpair failed and we were unable to recover it. 
00:33:06.756 [2024-07-25 23:39:04.391596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.756 [2024-07-25 23:39:04.391640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.756 qpair failed and we were unable to recover it. 00:33:06.756 [2024-07-25 23:39:04.391813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.756 [2024-07-25 23:39:04.391840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.756 qpair failed and we were unable to recover it. 00:33:06.756 [2024-07-25 23:39:04.391978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.756 [2024-07-25 23:39:04.392004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.756 qpair failed and we were unable to recover it. 00:33:06.756 [2024-07-25 23:39:04.392141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.756 [2024-07-25 23:39:04.392168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.756 qpair failed and we were unable to recover it. 00:33:06.756 [2024-07-25 23:39:04.392303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.756 [2024-07-25 23:39:04.392345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.756 qpair failed and we were unable to recover it. 00:33:06.756 [2024-07-25 23:39:04.392502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.756 [2024-07-25 23:39:04.392527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.756 qpair failed and we were unable to recover it. 00:33:06.756 [2024-07-25 23:39:04.392661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.756 [2024-07-25 23:39:04.392705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.756 qpair failed and we were unable to recover it. 00:33:06.756 [2024-07-25 23:39:04.392847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.756 [2024-07-25 23:39:04.392900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.756 qpair failed and we were unable to recover it. 00:33:06.756 [2024-07-25 23:39:04.393034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.756 [2024-07-25 23:39:04.393065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.756 qpair failed and we were unable to recover it. 00:33:06.756 [2024-07-25 23:39:04.393228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.756 [2024-07-25 23:39:04.393253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.756 qpair failed and we were unable to recover it. 
00:33:06.756 [2024-07-25 23:39:04.393409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.756 [2024-07-25 23:39:04.393437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.756 qpair failed and we were unable to recover it. 00:33:06.756 [2024-07-25 23:39:04.393612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.756 [2024-07-25 23:39:04.393638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.756 qpair failed and we were unable to recover it. 00:33:06.756 [2024-07-25 23:39:04.393740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.393765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.393918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.393946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.394083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.394109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.394208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.394233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.394360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.394386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.394524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.394550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.394680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.394705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.394869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.394898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 
00:33:06.757 [2024-07-25 23:39:04.395030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.395055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.395191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.395216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.395349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.395390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.395516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.395542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.395697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.395739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.395850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.395878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.396027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.396052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.396253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.396304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.396472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.396499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.396612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.396637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 
00:33:06.757 [2024-07-25 23:39:04.396769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.396794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.396950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.396979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.397100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.397125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.397240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.397266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.397374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.397400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.397535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.397560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.397732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.397760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.397899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.397925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.398033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.398062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.398200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.398226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 
00:33:06.757 [2024-07-25 23:39:04.398384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.398412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.398552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.398579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.398724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.398769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.757 [2024-07-25 23:39:04.398884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.757 [2024-07-25 23:39:04.398913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.757 qpair failed and we were unable to recover it. 00:33:06.758 [2024-07-25 23:39:04.399031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.758 [2024-07-25 23:39:04.399057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.758 qpair failed and we were unable to recover it. 00:33:06.758 [2024-07-25 23:39:04.399172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.758 [2024-07-25 23:39:04.399199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.758 qpair failed and we were unable to recover it. 00:33:06.758 [2024-07-25 23:39:04.399328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.758 [2024-07-25 23:39:04.399355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.758 qpair failed and we were unable to recover it. 00:33:06.758 [2024-07-25 23:39:04.399481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.758 [2024-07-25 23:39:04.399508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:06.758 qpair failed and we were unable to recover it. 00:33:06.758 [2024-07-25 23:39:04.399642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.758 [2024-07-25 23:39:04.399667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.758 qpair failed and we were unable to recover it. 00:33:06.758 [2024-07-25 23:39:04.399800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.758 [2024-07-25 23:39:04.399828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:06.758 qpair failed and we were unable to recover it. 
00:33:06.758 [2024-07-25 23:39:04.399966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.039 [2024-07-25 23:39:04.399991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.039 qpair failed and we were unable to recover it. 00:33:07.039 [2024-07-25 23:39:04.400157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.039 [2024-07-25 23:39:04.400201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.039 qpair failed and we were unable to recover it. 00:33:07.039 [2024-07-25 23:39:04.400343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.039 [2024-07-25 23:39:04.400371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.039 qpair failed and we were unable to recover it. 00:33:07.039 [2024-07-25 23:39:04.400498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.039 [2024-07-25 23:39:04.400523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.039 qpair failed and we were unable to recover it. 00:33:07.039 [2024-07-25 23:39:04.400627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.039 [2024-07-25 23:39:04.400658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.039 qpair failed and we were unable to recover it. 00:33:07.039 [2024-07-25 23:39:04.400775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.039 [2024-07-25 23:39:04.400803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.039 qpair failed and we were unable to recover it. 00:33:07.039 [2024-07-25 23:39:04.400980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.039 [2024-07-25 23:39:04.401006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.039 qpair failed and we were unable to recover it. 00:33:07.039 [2024-07-25 23:39:04.401156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.039 [2024-07-25 23:39:04.401185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.039 qpair failed and we were unable to recover it. 00:33:07.039 [2024-07-25 23:39:04.401330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.039 [2024-07-25 23:39:04.401358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.039 qpair failed and we were unable to recover it. 00:33:07.039 [2024-07-25 23:39:04.401492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.039 [2024-07-25 23:39:04.401518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.039 qpair failed and we were unable to recover it. 
00:33:07.039 [2024-07-25 23:39:04.401627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.039 [2024-07-25 23:39:04.401652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.039 qpair failed and we were unable to recover it. 00:33:07.039 [2024-07-25 23:39:04.401776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.039 [2024-07-25 23:39:04.401806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.039 qpair failed and we were unable to recover it. 00:33:07.039 [2024-07-25 23:39:04.401935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.039 [2024-07-25 23:39:04.401978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.039 qpair failed and we were unable to recover it. 00:33:07.039 [2024-07-25 23:39:04.402107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.039 [2024-07-25 23:39:04.402133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.039 qpair failed and we were unable to recover it. 00:33:07.039 [2024-07-25 23:39:04.402532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.039 [2024-07-25 23:39:04.402560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.039 qpair failed and we were unable to recover it. 00:33:07.039 [2024-07-25 23:39:04.402708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.039 [2024-07-25 23:39:04.402733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.039 qpair failed and we were unable to recover it. 00:33:07.039 [2024-07-25 23:39:04.402864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.039 [2024-07-25 23:39:04.402889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.039 qpair failed and we were unable to recover it. 00:33:07.039 [2024-07-25 23:39:04.403022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.039 [2024-07-25 23:39:04.403050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.039 qpair failed and we were unable to recover it. 00:33:07.039 [2024-07-25 23:39:04.403206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.039 [2024-07-25 23:39:04.403232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.039 qpair failed and we were unable to recover it. 00:33:07.039 [2024-07-25 23:39:04.403332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.039 [2024-07-25 23:39:04.403357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.039 qpair failed and we were unable to recover it. 
00:33:07.039 [2024-07-25 23:39:04.403489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.039 [2024-07-25 23:39:04.403532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.039 qpair failed and we were unable to recover it. 00:33:07.039 [2024-07-25 23:39:04.403637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.039 [2024-07-25 23:39:04.403662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.039 qpair failed and we were unable to recover it. 00:33:07.039 [2024-07-25 23:39:04.403821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.039 [2024-07-25 23:39:04.403846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.039 qpair failed and we were unable to recover it. 00:33:07.039 [2024-07-25 23:39:04.403982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.039 [2024-07-25 23:39:04.404024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.039 qpair failed and we were unable to recover it. 00:33:07.039 [2024-07-25 23:39:04.404184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.039 [2024-07-25 23:39:04.404210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.039 qpair failed and we were unable to recover it. 00:33:07.039 [2024-07-25 23:39:04.404316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.039 [2024-07-25 23:39:04.404357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.039 qpair failed and we were unable to recover it. 00:33:07.039 [2024-07-25 23:39:04.404504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-07-25 23:39:04.404533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.040 qpair failed and we were unable to recover it. 00:33:07.040 [2024-07-25 23:39:04.404689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-07-25 23:39:04.404714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.040 qpair failed and we were unable to recover it. 00:33:07.040 [2024-07-25 23:39:04.404823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-07-25 23:39:04.404849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.040 qpair failed and we were unable to recover it. 00:33:07.040 [2024-07-25 23:39:04.404959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-07-25 23:39:04.404984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.040 qpair failed and we were unable to recover it. 
00:33:07.040 [2024-07-25 23:39:04.405095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-07-25 23:39:04.405121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.040 qpair failed and we were unable to recover it. 00:33:07.040 [2024-07-25 23:39:04.405221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-07-25 23:39:04.405247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.040 qpair failed and we were unable to recover it. 00:33:07.040 [2024-07-25 23:39:04.405409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-07-25 23:39:04.405451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.040 qpair failed and we were unable to recover it. 00:33:07.040 [2024-07-25 23:39:04.405554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-07-25 23:39:04.405579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.040 qpair failed and we were unable to recover it. 00:33:07.040 [2024-07-25 23:39:04.405689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-07-25 23:39:04.405716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.040 qpair failed and we were unable to recover it. 00:33:07.040 [2024-07-25 23:39:04.405845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-07-25 23:39:04.405870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.040 qpair failed and we were unable to recover it. 00:33:07.040 [2024-07-25 23:39:04.406006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-07-25 23:39:04.406031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.040 qpair failed and we were unable to recover it. 00:33:07.040 [2024-07-25 23:39:04.406140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-07-25 23:39:04.406166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.040 qpair failed and we were unable to recover it. 00:33:07.040 [2024-07-25 23:39:04.406363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-07-25 23:39:04.406389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.040 qpair failed and we were unable to recover it. 00:33:07.040 [2024-07-25 23:39:04.406521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-07-25 23:39:04.406546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.040 qpair failed and we were unable to recover it. 
00:33:07.040 [2024-07-25 23:39:04.406679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-07-25 23:39:04.406705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.040 qpair failed and we were unable to recover it. 00:33:07.040 [2024-07-25 23:39:04.406841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-07-25 23:39:04.406867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.040 qpair failed and we were unable to recover it. 00:33:07.040 [2024-07-25 23:39:04.406996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-07-25 23:39:04.407021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.040 qpair failed and we were unable to recover it. 00:33:07.040 [2024-07-25 23:39:04.407159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-07-25 23:39:04.407201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.040 qpair failed and we were unable to recover it. 00:33:07.040 [2024-07-25 23:39:04.407343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-07-25 23:39:04.407371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.040 qpair failed and we were unable to recover it. 00:33:07.040 [2024-07-25 23:39:04.407505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-07-25 23:39:04.407531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.040 qpair failed and we were unable to recover it. 00:33:07.040 [2024-07-25 23:39:04.407687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-07-25 23:39:04.407712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.040 qpair failed and we were unable to recover it. 00:33:07.040 [2024-07-25 23:39:04.407850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-07-25 23:39:04.407875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.040 qpair failed and we were unable to recover it. 00:33:07.040 [2024-07-25 23:39:04.408042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-07-25 23:39:04.408072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.040 qpair failed and we were unable to recover it. 00:33:07.040 [2024-07-25 23:39:04.408210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-07-25 23:39:04.408253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.040 qpair failed and we were unable to recover it. 
00:33:07.040 [2024-07-25 23:39:04.408395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.040 [2024-07-25 23:39:04.408422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.040 qpair failed and we were unable to recover it.
[... the identical connect() failure (errno = 111) and unrecoverable qpair error repeats for tqpair=0xfa44b0 through 23:39:04.425965 ...]
00:33:07.044 [2024-07-25 23:39:04.426120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.044 [2024-07-25 23:39:04.426160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.044 qpair failed and we were unable to recover it.
[... the same failure repeats, alternating between tqpair=0xfa44b0 and tqpair=0x7fdc0c000b90, through 23:39:04.442936 ...]
00:33:07.047 [2024-07-25 23:39:04.443084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.047 [2024-07-25 23:39:04.443128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.047 qpair failed and we were unable to recover it.
00:33:07.047 [2024-07-25 23:39:04.443274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.047 [2024-07-25 23:39:04.443302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.047 qpair failed and we were unable to recover it. 00:33:07.047 [2024-07-25 23:39:04.443426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.047 [2024-07-25 23:39:04.443451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.047 qpair failed and we were unable to recover it. 00:33:07.047 [2024-07-25 23:39:04.443582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.047 [2024-07-25 23:39:04.443607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.047 qpair failed and we were unable to recover it. 00:33:07.047 [2024-07-25 23:39:04.443774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.047 [2024-07-25 23:39:04.443799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.047 qpair failed and we were unable to recover it. 00:33:07.047 [2024-07-25 23:39:04.443925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.047 [2024-07-25 23:39:04.443949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.047 qpair failed and we were unable to recover it. 00:33:07.047 [2024-07-25 23:39:04.444131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.047 [2024-07-25 23:39:04.444160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.047 qpair failed and we were unable to recover it. 00:33:07.047 [2024-07-25 23:39:04.444298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.047 [2024-07-25 23:39:04.444325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.047 qpair failed and we were unable to recover it. 00:33:07.047 [2024-07-25 23:39:04.444476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.047 [2024-07-25 23:39:04.444500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.047 qpair failed and we were unable to recover it. 00:33:07.047 [2024-07-25 23:39:04.444660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.047 [2024-07-25 23:39:04.444685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.047 qpair failed and we were unable to recover it. 00:33:07.047 [2024-07-25 23:39:04.444868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.047 [2024-07-25 23:39:04.444896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.047 qpair failed and we were unable to recover it. 
00:33:07.047 [2024-07-25 23:39:04.445033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.047 [2024-07-25 23:39:04.445066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.047 qpair failed and we were unable to recover it. 00:33:07.047 [2024-07-25 23:39:04.445184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.047 [2024-07-25 23:39:04.445212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.047 qpair failed and we were unable to recover it. 00:33:07.047 [2024-07-25 23:39:04.445345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.047 [2024-07-25 23:39:04.445370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.047 qpair failed and we were unable to recover it. 00:33:07.047 [2024-07-25 23:39:04.445478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.047 [2024-07-25 23:39:04.445504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.445661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.445687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.445852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.445877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.446014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.446039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.446168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.446193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.446342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.446370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.446500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.446525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 
00:33:07.048 [2024-07-25 23:39:04.446682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.446707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.446862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.446900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.447041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.447079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.447234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.447260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.447391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.447417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.447548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.447573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.447713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.447740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.447879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.447905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.448073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.448099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.448227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.448255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 
00:33:07.048 [2024-07-25 23:39:04.448376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.448404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.448559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.448583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.448716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.448742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.448946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.448973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.449136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.449163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.449272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.449297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.449429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.449455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.449582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.449607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.449743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.449783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.449904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.449934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 
00:33:07.048 [2024-07-25 23:39:04.450098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.450124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.450236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.450260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.450396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.450422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.450563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.450588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.450696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.048 [2024-07-25 23:39:04.450721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.048 qpair failed and we were unable to recover it. 00:33:07.048 [2024-07-25 23:39:04.450879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.450905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.451036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.451082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.451209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.451234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.451392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.451418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.451615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.451641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 
00:33:07.049 [2024-07-25 23:39:04.451792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.451826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.451956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.451986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.452117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.452143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.452265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.452291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.452396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.452422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.452557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.452582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.452686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.452712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.452889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.452919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.453079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.453116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.453239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.453263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 
00:33:07.049 [2024-07-25 23:39:04.453393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.453417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.453549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.453574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.453698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.453740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.453885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.453913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.454044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.454077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.454191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.454217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.454352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.454377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.454479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.454504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.454605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.454631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.454794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.454821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 
00:33:07.049 [2024-07-25 23:39:04.454985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.455012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.455172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.455204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.455349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.455376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.455530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.455555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.455661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.455687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.455882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.455907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.456045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.049 [2024-07-25 23:39:04.456079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.049 qpair failed and we were unable to recover it. 00:33:07.049 [2024-07-25 23:39:04.456187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.456212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 00:33:07.050 [2024-07-25 23:39:04.456364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.456392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 00:33:07.050 [2024-07-25 23:39:04.456576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.456602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 
00:33:07.050 [2024-07-25 23:39:04.456709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.456734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 00:33:07.050 [2024-07-25 23:39:04.456848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.456875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 00:33:07.050 [2024-07-25 23:39:04.457033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.457067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 00:33:07.050 [2024-07-25 23:39:04.457231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.457261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 00:33:07.050 [2024-07-25 23:39:04.457385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.457432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 00:33:07.050 [2024-07-25 23:39:04.457588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.457613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 00:33:07.050 [2024-07-25 23:39:04.457713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.457739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 00:33:07.050 [2024-07-25 23:39:04.457870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.457901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 00:33:07.050 [2024-07-25 23:39:04.458029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.458055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 00:33:07.050 [2024-07-25 23:39:04.458193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.458218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 
00:33:07.050 [2024-07-25 23:39:04.458319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.458345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 00:33:07.050 [2024-07-25 23:39:04.458501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.458526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 00:33:07.050 [2024-07-25 23:39:04.458633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.458678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 00:33:07.050 [2024-07-25 23:39:04.458895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.458922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 00:33:07.050 [2024-07-25 23:39:04.459031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.459057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 00:33:07.050 [2024-07-25 23:39:04.459243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.459269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 00:33:07.050 [2024-07-25 23:39:04.459399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.459425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 00:33:07.050 [2024-07-25 23:39:04.459554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.459579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 00:33:07.050 [2024-07-25 23:39:04.459714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.459759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 00:33:07.050 [2024-07-25 23:39:04.459912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.459937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 
00:33:07.050 [2024-07-25 23:39:04.460074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.460100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 00:33:07.050 [2024-07-25 23:39:04.460260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.460304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 00:33:07.050 [2024-07-25 23:39:04.460473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.460502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 00:33:07.050 [2024-07-25 23:39:04.460652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.050 [2024-07-25 23:39:04.460677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.050 qpair failed and we were unable to recover it. 00:33:07.050 [2024-07-25 23:39:04.460786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.051 [2024-07-25 23:39:04.460811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.051 qpair failed and we were unable to recover it. 00:33:07.051 [2024-07-25 23:39:04.460925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.051 [2024-07-25 23:39:04.460950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.051 qpair failed and we were unable to recover it. 00:33:07.051 [2024-07-25 23:39:04.461053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.051 [2024-07-25 23:39:04.461087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.051 qpair failed and we were unable to recover it. 00:33:07.051 [2024-07-25 23:39:04.461262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.051 [2024-07-25 23:39:04.461291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.051 qpair failed and we were unable to recover it. 00:33:07.051 [2024-07-25 23:39:04.461438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.051 [2024-07-25 23:39:04.461466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.051 qpair failed and we were unable to recover it. 00:33:07.051 [2024-07-25 23:39:04.461642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.051 [2024-07-25 23:39:04.461667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.051 qpair failed and we were unable to recover it. 
00:33:07.051 [2024-07-25 23:39:04.461801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.051 [2024-07-25 23:39:04.461826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.051 qpair failed and we were unable to recover it. 00:33:07.051 [2024-07-25 23:39:04.461956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.051 [2024-07-25 23:39:04.461981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.051 qpair failed and we were unable to recover it. 00:33:07.051 [2024-07-25 23:39:04.462086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.051 [2024-07-25 23:39:04.462112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.051 qpair failed and we were unable to recover it. 00:33:07.051 [2024-07-25 23:39:04.462248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.051 [2024-07-25 23:39:04.462274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.051 qpair failed and we were unable to recover it. 00:33:07.051 [2024-07-25 23:39:04.462462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.051 [2024-07-25 23:39:04.462489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.051 qpair failed and we were unable to recover it. 00:33:07.051 [2024-07-25 23:39:04.462614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.051 [2024-07-25 23:39:04.462640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.051 qpair failed and we were unable to recover it. 00:33:07.051 [2024-07-25 23:39:04.462796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.051 [2024-07-25 23:39:04.462837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.051 qpair failed and we were unable to recover it. 00:33:07.051 [2024-07-25 23:39:04.462952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.051 [2024-07-25 23:39:04.462980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.051 qpair failed and we were unable to recover it. 00:33:07.051 [2024-07-25 23:39:04.463100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.051 [2024-07-25 23:39:04.463126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.051 qpair failed and we were unable to recover it. 00:33:07.051 [2024-07-25 23:39:04.463260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.051 [2024-07-25 23:39:04.463285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.051 qpair failed and we were unable to recover it. 
00:33:07.051 [2024-07-25 23:39:04.463452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.051 [2024-07-25 23:39:04.463495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.051 qpair failed and we were unable to recover it. 00:33:07.051 [2024-07-25 23:39:04.463669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.051 [2024-07-25 23:39:04.463695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.051 qpair failed and we were unable to recover it. 00:33:07.051 [2024-07-25 23:39:04.463833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.051 [2024-07-25 23:39:04.463859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.051 qpair failed and we were unable to recover it. 00:33:07.051 [2024-07-25 23:39:04.464022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.051 [2024-07-25 23:39:04.464073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.051 qpair failed and we were unable to recover it. 00:33:07.051 [2024-07-25 23:39:04.464214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.051 [2024-07-25 23:39:04.464240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.051 qpair failed and we were unable to recover it. 00:33:07.051 [2024-07-25 23:39:04.464366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.051 [2024-07-25 23:39:04.464392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.051 qpair failed and we were unable to recover it. 00:33:07.051 [2024-07-25 23:39:04.464581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.051 [2024-07-25 23:39:04.464607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.051 qpair failed and we were unable to recover it. 00:33:07.051 [2024-07-25 23:39:04.464769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.051 [2024-07-25 23:39:04.464795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.051 qpair failed and we were unable to recover it. 00:33:07.051 [2024-07-25 23:39:04.464927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.051 [2024-07-25 23:39:04.464970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.051 qpair failed and we were unable to recover it. 00:33:07.051 [2024-07-25 23:39:04.465123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.051 [2024-07-25 23:39:04.465153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.051 qpair failed and we were unable to recover it. 
00:33:07.051 [2024-07-25 23:39:04.465329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.051 [2024-07-25 23:39:04.465355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.051 qpair failed and we were unable to recover it.
00:33:07.051 [... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." records repeat continuously from 23:39:04.465 through 23:39:04.501 for tqpair=0xfa44b0, 0x7fdc1c000b90, 0x7fdc0c000b90, and 0x7fdc14000b90, all with addr=10.0.0.2, port=4420 ...]
00:33:07.058 [2024-07-25 23:39:04.501919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.058 [2024-07-25 23:39:04.501945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.058 qpair failed and we were unable to recover it.
00:33:07.058 [2024-07-25 23:39:04.502086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.058 [2024-07-25 23:39:04.502136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.058 qpair failed and we were unable to recover it. 00:33:07.058 [2024-07-25 23:39:04.502282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.058 [2024-07-25 23:39:04.502310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.058 qpair failed and we were unable to recover it. 00:33:07.058 [2024-07-25 23:39:04.502445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.058 [2024-07-25 23:39:04.502472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.058 qpair failed and we were unable to recover it. 00:33:07.058 [2024-07-25 23:39:04.502598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.058 [2024-07-25 23:39:04.502626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.058 qpair failed and we were unable to recover it. 00:33:07.058 [2024-07-25 23:39:04.502745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.058 [2024-07-25 23:39:04.502773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.058 qpair failed and we were unable to recover it. 00:33:07.058 [2024-07-25 23:39:04.502940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.058 [2024-07-25 23:39:04.502968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.058 qpair failed and we were unable to recover it. 00:33:07.058 [2024-07-25 23:39:04.503119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.058 [2024-07-25 23:39:04.503144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.058 qpair failed and we were unable to recover it. 00:33:07.058 [2024-07-25 23:39:04.503287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.058 [2024-07-25 23:39:04.503313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.058 qpair failed and we were unable to recover it. 00:33:07.058 [2024-07-25 23:39:04.503483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.058 [2024-07-25 23:39:04.503508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.058 qpair failed and we were unable to recover it. 00:33:07.058 [2024-07-25 23:39:04.503727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.058 [2024-07-25 23:39:04.503753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.058 qpair failed and we were unable to recover it. 
00:33:07.058 [2024-07-25 23:39:04.503864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.058 [2024-07-25 23:39:04.503892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.058 qpair failed and we were unable to recover it. 00:33:07.058 [2024-07-25 23:39:04.504014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.058 [2024-07-25 23:39:04.504041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.058 qpair failed and we were unable to recover it. 00:33:07.058 [2024-07-25 23:39:04.504202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.058 [2024-07-25 23:39:04.504230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.058 qpair failed and we were unable to recover it. 00:33:07.058 [2024-07-25 23:39:04.504375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.504403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.504521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.504548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.504664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.504691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.504834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.504862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.504977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.505004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.505187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.505212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.505395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.505423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 
00:33:07.059 [2024-07-25 23:39:04.505581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.505623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.505837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.505865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.506015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.506039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.506183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.506208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.506338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.506366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.506514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.506541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.506653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.506680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.506884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.506947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.507071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.507098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.507231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.507257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 
00:33:07.059 [2024-07-25 23:39:04.507386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.507430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.507567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.507593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.507753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.507796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.507903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.507929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.508071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.508099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.508227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.508255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.508486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.508538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.508709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.508755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.508919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.508945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.509053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.509085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 
00:33:07.059 [2024-07-25 23:39:04.509220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.509245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.509368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.509395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.509537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.509564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.509719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.509744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.509908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.059 [2024-07-25 23:39:04.509935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.059 qpair failed and we were unable to recover it. 00:33:07.059 [2024-07-25 23:39:04.510065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.510090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.510250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.510275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.510426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.510454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.510614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.510638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.510797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.510824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 
00:33:07.060 [2024-07-25 23:39:04.510944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.510971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.511122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.511147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.511278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.511306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.511430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.511458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.511630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.511662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.511810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.511837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.512011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.512039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.512203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.512241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.512405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.512449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.512599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.512643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 
00:33:07.060 [2024-07-25 23:39:04.512806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.512833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.512991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.513017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.513178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.513226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.513460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.513489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.513745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.513792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.513903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.513929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.514066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.514093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.514215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.514242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.514400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.514425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.514585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.514613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 
00:33:07.060 [2024-07-25 23:39:04.514728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.514755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.514872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.514899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.515074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.515100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.515228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.515252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.515381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.515409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.515558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.515586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.060 [2024-07-25 23:39:04.515758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.060 [2024-07-25 23:39:04.515785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.060 qpair failed and we were unable to recover it. 00:33:07.061 [2024-07-25 23:39:04.515894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.515921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 00:33:07.061 [2024-07-25 23:39:04.516045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.516082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 00:33:07.061 [2024-07-25 23:39:04.516237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.516262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 
00:33:07.061 [2024-07-25 23:39:04.516414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.516442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 00:33:07.061 [2024-07-25 23:39:04.516655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.516682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 00:33:07.061 [2024-07-25 23:39:04.516836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.516865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 00:33:07.061 [2024-07-25 23:39:04.516987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.517015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 00:33:07.061 [2024-07-25 23:39:04.517180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.517206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 00:33:07.061 [2024-07-25 23:39:04.517341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.517366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 00:33:07.061 [2024-07-25 23:39:04.517484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.517511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 00:33:07.061 [2024-07-25 23:39:04.517713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.517741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 00:33:07.061 [2024-07-25 23:39:04.517888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.517916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 00:33:07.061 [2024-07-25 23:39:04.518046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.518077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 
00:33:07.061 [2024-07-25 23:39:04.518214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.518239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 00:33:07.061 [2024-07-25 23:39:04.518345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.518370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 00:33:07.061 [2024-07-25 23:39:04.518528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.518556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 00:33:07.061 [2024-07-25 23:39:04.518764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.518792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 00:33:07.061 [2024-07-25 23:39:04.518936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.518964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 00:33:07.061 [2024-07-25 23:39:04.519083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.519124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 00:33:07.061 [2024-07-25 23:39:04.519262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.519287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 00:33:07.061 [2024-07-25 23:39:04.519421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.519446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 00:33:07.061 [2024-07-25 23:39:04.519578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.519608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 00:33:07.061 [2024-07-25 23:39:04.519741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.519785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 
00:33:07.061 [2024-07-25 23:39:04.519955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.519983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 00:33:07.061 [2024-07-25 23:39:04.520119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.520145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 00:33:07.061 [2024-07-25 23:39:04.520273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.520299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 00:33:07.061 [2024-07-25 23:39:04.520425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.520452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 00:33:07.061 [2024-07-25 23:39:04.520593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.061 [2024-07-25 23:39:04.520621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.061 qpair failed and we were unable to recover it. 00:33:07.062 [2024-07-25 23:39:04.520764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.062 [2024-07-25 23:39:04.520792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.062 qpair failed and we were unable to recover it. 00:33:07.062 [2024-07-25 23:39:04.520973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.062 [2024-07-25 23:39:04.520998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.062 qpair failed and we were unable to recover it. 00:33:07.062 [2024-07-25 23:39:04.521095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.062 [2024-07-25 23:39:04.521121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.062 qpair failed and we were unable to recover it. 00:33:07.062 [2024-07-25 23:39:04.521231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.062 [2024-07-25 23:39:04.521257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.062 qpair failed and we were unable to recover it. 00:33:07.062 [2024-07-25 23:39:04.521369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.062 [2024-07-25 23:39:04.521395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.062 qpair failed and we were unable to recover it. 
00:33:07.062 [2024-07-25 23:39:04.521542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.062 [2024-07-25 23:39:04.521570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.062 qpair failed and we were unable to recover it. 00:33:07.062 [2024-07-25 23:39:04.521784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.062 [2024-07-25 23:39:04.521813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.062 qpair failed and we were unable to recover it. 00:33:07.062 [2024-07-25 23:39:04.521925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.062 [2024-07-25 23:39:04.521953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.062 qpair failed and we were unable to recover it. 00:33:07.062 [2024-07-25 23:39:04.522106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.062 [2024-07-25 23:39:04.522132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.062 qpair failed and we were unable to recover it. 00:33:07.062 [2024-07-25 23:39:04.522291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.062 [2024-07-25 23:39:04.522316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.062 qpair failed and we were unable to recover it. 00:33:07.062 [2024-07-25 23:39:04.522464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.062 [2024-07-25 23:39:04.522492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.062 qpair failed and we were unable to recover it. 00:33:07.062 [2024-07-25 23:39:04.522633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.062 [2024-07-25 23:39:04.522661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.062 qpair failed and we were unable to recover it. 00:33:07.062 [2024-07-25 23:39:04.522833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.062 [2024-07-25 23:39:04.522862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.062 qpair failed and we were unable to recover it. 00:33:07.062 [2024-07-25 23:39:04.523013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.062 [2024-07-25 23:39:04.523040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.062 qpair failed and we were unable to recover it. 00:33:07.062 [2024-07-25 23:39:04.523185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.062 [2024-07-25 23:39:04.523211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.062 qpair failed and we were unable to recover it. 
00:33:07.062 [2024-07-25 23:39:04.523347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.062 [2024-07-25 23:39:04.523372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.062 qpair failed and we were unable to recover it. 00:33:07.062 [2024-07-25 23:39:04.523499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.062 [2024-07-25 23:39:04.523542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.062 qpair failed and we were unable to recover it. 00:33:07.062 [2024-07-25 23:39:04.523655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.062 [2024-07-25 23:39:04.523691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.062 qpair failed and we were unable to recover it. 00:33:07.062 [2024-07-25 23:39:04.523813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.062 [2024-07-25 23:39:04.523855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.062 qpair failed and we were unable to recover it. 00:33:07.062 [2024-07-25 23:39:04.523985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.062 [2024-07-25 23:39:04.524010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.062 qpair failed and we were unable to recover it. 00:33:07.062 [2024-07-25 23:39:04.524147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.062 [2024-07-25 23:39:04.524173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.062 qpair failed and we were unable to recover it. 00:33:07.062 [2024-07-25 23:39:04.524309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.062 [2024-07-25 23:39:04.524334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.062 qpair failed and we were unable to recover it. 00:33:07.062 [2024-07-25 23:39:04.524483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.062 [2024-07-25 23:39:04.524511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.062 qpair failed and we were unable to recover it. 00:33:07.062 [2024-07-25 23:39:04.524653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.062 [2024-07-25 23:39:04.524681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.062 qpair failed and we were unable to recover it. 00:33:07.062 [2024-07-25 23:39:04.524823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.062 [2024-07-25 23:39:04.524851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.062 qpair failed and we were unable to recover it. 
00:33:07.062 [2024-07-25 23:39:04.525022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.062 [2024-07-25 23:39:04.525050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.062 qpair failed and we were unable to recover it.
00:33:07.063 [2024-07-25 23:39:04.527387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.063 [2024-07-25 23:39:04.527444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.063 qpair failed and we were unable to recover it.
[... the three-line pattern above repeats back-to-back from 2024-07-25 23:39:04.525022 through 23:39:04.561330 (log timestamps 00:33:07.062-00:33:07.069), alternating between tqpair=0xfa44b0 and tqpair=0x7fdc14000b90, always with addr=10.0.0.2, port=4420 and errno = 111 ...]
00:33:07.069 [2024-07-25 23:39:04.561458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.069 [2024-07-25 23:39:04.561485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.069 qpair failed and we were unable to recover it. 00:33:07.069 [2024-07-25 23:39:04.561678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.069 [2024-07-25 23:39:04.561706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.069 qpair failed and we were unable to recover it. 00:33:07.069 [2024-07-25 23:39:04.561813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.069 [2024-07-25 23:39:04.561841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.069 qpair failed and we were unable to recover it. 00:33:07.069 [2024-07-25 23:39:04.562013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.069 [2024-07-25 23:39:04.562040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.069 qpair failed and we were unable to recover it. 00:33:07.069 [2024-07-25 23:39:04.562207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.069 [2024-07-25 23:39:04.562232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.069 qpair failed and we were unable to recover it. 00:33:07.069 [2024-07-25 23:39:04.562336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.069 [2024-07-25 23:39:04.562362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.069 qpair failed and we were unable to recover it. 00:33:07.069 [2024-07-25 23:39:04.562405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb2470 (9): Bad file descriptor 00:33:07.069 [2024-07-25 23:39:04.562593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.069 [2024-07-25 23:39:04.562637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.069 qpair failed and we were unable to recover it. 00:33:07.069 [2024-07-25 23:39:04.562762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.069 [2024-07-25 23:39:04.562792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.069 qpair failed and we were unable to recover it. 00:33:07.069 [2024-07-25 23:39:04.562977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.069 [2024-07-25 23:39:04.563006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.069 qpair failed and we were unable to recover it. 
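errno = 111 in the posix_sock_create failures above is ECONNREFUSED on Linux: the target address (10.0.0.2 on port 4420, the conventional NVMe/TCP port) was reachable, but nothing was accepting connections there, so every qpair connect attempt failed and the initiator kept retrying. A minimal stand-alone probe reproduces the same errno when no listener is present; this is a sketch, not SPDK code, and the address and port are simply taken from the log records above.

/* probe_connect.c — minimal sketch, not SPDK code: attempt a TCP connect
 * to the target seen in the log (10.0.0.2:4420) and report the errno.
 * With no NVMe/TCP listener accepting on that port, connect() fails with
 * errno = 111 (ECONNREFUSED), matching the posix_sock_create errors above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),   /* conventional NVMe/TCP port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* Expected output here: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connected\n");
    }

    close(fd);
    return 0;
}

Built with cc probe_connect.c -o probe and run while the NVMe/TCP target is down, it prints the same errno = 111 the log shows on every attempt.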
00:33:07.069 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet for tqpair=0x7fdc0c000b90 repeats from 2024-07-25 23:39:04.562762 through 23:39:04.572164 ...]
00:33:07.071 [... further tqpair=0x7fdc0c000b90 connect() failures (errno = 111) from 2024-07-25 23:39:04.572311 through 23:39:04.573035 ...]
00:33:07.071 [2024-07-25 23:39:04.573218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.071 [2024-07-25 23:39:04.573256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.071 qpair failed and we were unable to recover it.
00:33:07.071 [... the same triplet for tqpair=0x7fdc1c000b90 repeats through 2024-07-25 23:39:04.573849 ...]
00:33:07.071 [... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet continues, alternating between tqpair=0x7fdc1c000b90 and tqpair=0x7fdc0c000b90, from 2024-07-25 23:39:04.573979 through 23:39:04.590986 ...]
00:33:07.075 [2024-07-25 23:39:04.591131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.591160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.591287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.591312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.591442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.591466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.591626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.591652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.591802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.591827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.591963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.591987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.592115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.592141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.592303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.592327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.592459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.592500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.592648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.592676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 
00:33:07.075 [2024-07-25 23:39:04.592825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.592849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.592975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.593000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.593188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.593232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.593396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.593424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.593557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.593600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.593729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.593754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.593910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.593936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.594078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.594108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.594231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.594262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.594425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.594451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 
00:33:07.075 [2024-07-25 23:39:04.594583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.594608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.594765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.594808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.594964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.594991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.595094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.595120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.595306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.595335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.595488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.595514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.595622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.595647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.595776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.595802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.595899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.595924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.075 qpair failed and we were unable to recover it. 00:33:07.075 [2024-07-25 23:39:04.596066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.075 [2024-07-25 23:39:04.596092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 
00:33:07.076 [2024-07-25 23:39:04.596220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.596257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.596443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.596469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.596620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.596650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.596770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.596801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.596994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.597020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.597146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.597172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.597307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.597351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.597508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.597534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.597648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.597672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.597804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.597829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 
00:33:07.076 [2024-07-25 23:39:04.597964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.597989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.598091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.598117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.598250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.598276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.598401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.598426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.598531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.598557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.598745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.598773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.598890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.598915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.599054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.599086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.599218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.599243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.599375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.599401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 
00:33:07.076 [2024-07-25 23:39:04.599585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.599612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.599756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.599787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.599973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.599999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.600149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.600177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.600331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.600358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.600519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.600544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.600693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.600721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.600908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.600937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.601085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.601112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.601270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.601311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 
00:33:07.076 [2024-07-25 23:39:04.601485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.076 [2024-07-25 23:39:04.601531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.076 qpair failed and we were unable to recover it. 00:33:07.076 [2024-07-25 23:39:04.601681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.601706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 00:33:07.077 [2024-07-25 23:39:04.601841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.601884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 00:33:07.077 [2024-07-25 23:39:04.602037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.602073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 00:33:07.077 [2024-07-25 23:39:04.602226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.602251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 00:33:07.077 [2024-07-25 23:39:04.602355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.602381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 00:33:07.077 [2024-07-25 23:39:04.602535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.602563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 00:33:07.077 [2024-07-25 23:39:04.602690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.602716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 00:33:07.077 [2024-07-25 23:39:04.602828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.602853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 00:33:07.077 [2024-07-25 23:39:04.602958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.602983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 
00:33:07.077 [2024-07-25 23:39:04.603085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.603116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 00:33:07.077 [2024-07-25 23:39:04.603248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.603273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 00:33:07.077 [2024-07-25 23:39:04.603434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.603463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 00:33:07.077 [2024-07-25 23:39:04.603622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.603648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 00:33:07.077 [2024-07-25 23:39:04.603758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.603783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 00:33:07.077 [2024-07-25 23:39:04.603917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.603943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 00:33:07.077 [2024-07-25 23:39:04.604071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.604097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 00:33:07.077 [2024-07-25 23:39:04.604234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.604260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 00:33:07.077 [2024-07-25 23:39:04.604390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.604416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 00:33:07.077 [2024-07-25 23:39:04.604514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.604540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 
00:33:07.077 [2024-07-25 23:39:04.604642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.604668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 00:33:07.077 [2024-07-25 23:39:04.604827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.604852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 00:33:07.077 [2024-07-25 23:39:04.604983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.605009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 00:33:07.077 [2024-07-25 23:39:04.605119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.605145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 00:33:07.077 [2024-07-25 23:39:04.605281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.605307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 00:33:07.077 [2024-07-25 23:39:04.605440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.605466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 00:33:07.077 [2024-07-25 23:39:04.605577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.605618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 00:33:07.077 [2024-07-25 23:39:04.605795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.605824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.077 qpair failed and we were unable to recover it. 00:33:07.077 [2024-07-25 23:39:04.605950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.077 [2024-07-25 23:39:04.605976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.606105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.606131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 
00:33:07.078 [2024-07-25 23:39:04.606286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.606314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.606473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.606499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.606637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.606663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.606824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.606867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.607028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.607054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.607197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.607223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.607383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.607411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.607548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.607574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.607686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.607711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.607902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.607930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 
00:33:07.078 [2024-07-25 23:39:04.608078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.608105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.608214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.608239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.608376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.608401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.608533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.608559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.608690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.608734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.608892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.608920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.609066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.609093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.609276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.609304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.609428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.609455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.609608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.609634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 
00:33:07.078 [2024-07-25 23:39:04.609737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.609766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.609926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.609970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.610128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.610155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.610255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.610281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.610438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.610465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.610589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.610615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.610769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.610795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.610965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.610991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.611105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.611132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 00:33:07.078 [2024-07-25 23:39:04.611240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.078 [2024-07-25 23:39:04.611267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.078 qpair failed and we were unable to recover it. 
00:33:07.079 [2024-07-25 23:39:04.611405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.611430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.611532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.611558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.611661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.611686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.611818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.611843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.612013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.612038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.612170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.612196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.612304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.612328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.612482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.612508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.612642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.612667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.612796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.612821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 
00:33:07.079 [2024-07-25 23:39:04.612954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.612979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.613090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.613131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.613314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.613345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.613504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.613530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.613662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.613704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.613822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.613850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.613998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.614024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.614167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.614193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.614330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.614356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.614458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.614483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 
00:33:07.079 [2024-07-25 23:39:04.614583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.614608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.614765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.614793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.614936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.614962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.615091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.615138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.615283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.615312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.615495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.615521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.615624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.615651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.615787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.615829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.615990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.616016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.616202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.616231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 
00:33:07.079 [2024-07-25 23:39:04.616396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.616447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.079 [2024-07-25 23:39:04.616631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.079 [2024-07-25 23:39:04.616658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.079 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.616762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.616804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.616949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.616977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.617133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.617161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.617294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.617336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.617493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.617520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.617620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.617645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.617746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.617771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.617876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.617901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 
00:33:07.080 [2024-07-25 23:39:04.618033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.618064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.618197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.618224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.618369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.618397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.618577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.618603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.618757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.618786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.618956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.618986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.619125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.619152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.619294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.619320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.619521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.619550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.619660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.619686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 
00:33:07.080 [2024-07-25 23:39:04.619795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.619820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.619984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.620011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.620189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.620215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.620395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.620423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.620627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.620680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.620834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.620860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.620967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.620994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.621154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.621210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.621400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.621427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.621540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.621566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 
00:33:07.080 [2024-07-25 23:39:04.621724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.621750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.621858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.621883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.080 [2024-07-25 23:39:04.622020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.080 [2024-07-25 23:39:04.622071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.080 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.622299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.622328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.622515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.622541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.622692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.622721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.622866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.622895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.623080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.623107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.623281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.623310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.623441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.623490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 
00:33:07.081 [2024-07-25 23:39:04.623625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.623657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.623758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.623784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.623931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.623959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.624078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.624104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.624260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.624285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.624533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.624582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.624767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.624793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.624898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.624940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.625104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.625129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.625260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.625285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 
00:33:07.081 [2024-07-25 23:39:04.625388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.625413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.625565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.625593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.625769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.625794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.625966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.625994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.626172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.626202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.626356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.626381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.626482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.626508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.626645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.626669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.626782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.626807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.626905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.626929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 
00:33:07.081 [2024-07-25 23:39:04.627065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.627094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.627233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.627259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.627393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.627418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.627611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.627637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.627771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.627796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.081 qpair failed and we were unable to recover it. 00:33:07.081 [2024-07-25 23:39:04.627925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.081 [2024-07-25 23:39:04.627966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.628084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.628114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.628275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.628301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.628474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.628502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.628694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.628739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 
00:33:07.082 [2024-07-25 23:39:04.628868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.628893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.629026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.629051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.629217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.629246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.629404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.629429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.629533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.629557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.629679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.629706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.629832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.629858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.629964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.629989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.630173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.630202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.630357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.630382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 
00:33:07.082 [2024-07-25 23:39:04.630520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.630548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.630686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.630712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.630842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.630867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.631040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.631073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.631216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.631244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.631395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.631421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.631553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.631578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.631734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.631764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.631909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.631935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.632077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.632102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 
00:33:07.082 [2024-07-25 23:39:04.632259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.632286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.632444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.632469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.632652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.632681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.632806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.632833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.632965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.632992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.633137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.633163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.633347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.633374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.633549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.633573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.633749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.633777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.082 qpair failed and we were unable to recover it. 00:33:07.082 [2024-07-25 23:39:04.633900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.082 [2024-07-25 23:39:04.633929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 
00:33:07.083 [2024-07-25 23:39:04.634090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.634115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.634276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.634317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.634482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.634537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.634663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.634689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.634793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.634818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.634963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.634992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.635148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.635174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.635307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.635348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.635528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.635553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.635663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.635688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 
00:33:07.083 [2024-07-25 23:39:04.635824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.635850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.636014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.636065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.636208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.636236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.636370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.636396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.636529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.636556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.636692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.636718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.636876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.636919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.637038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.637072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.637198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.637225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.637367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.637393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 
00:33:07.083 [2024-07-25 23:39:04.637527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.637558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.637662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.637688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.637822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.637848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.638025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.638055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.638220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.638247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.638362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.638388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.638548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.638574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.638745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.638771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.638874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.638900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.083 qpair failed and we were unable to recover it. 00:33:07.083 [2024-07-25 23:39:04.639026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.083 [2024-07-25 23:39:04.639056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 
00:33:07.084 [2024-07-25 23:39:04.639253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.639279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.639389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.639415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.639581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.639607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.639753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.639781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.639916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.639942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.640086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.640113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.640265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.640291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.640423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.640449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.640649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.640696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.640850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.640876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 
00:33:07.084 [2024-07-25 23:39:04.641049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.641085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.641201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.641231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.641363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.641389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.641523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.641549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.641721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.641764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.641914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.641940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.642074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.642118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.642249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.642293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.642451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.642477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.642626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.642656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 
00:33:07.084 [2024-07-25 23:39:04.642774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.642802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.642962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.642991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.643123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.643166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.643288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.643317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.643469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.643495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.643600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.643626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.643812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.643840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.643988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.644014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.644124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.644151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 00:33:07.084 [2024-07-25 23:39:04.644260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.084 [2024-07-25 23:39:04.644288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.084 qpair failed and we were unable to recover it. 
[... the identical three-message failure sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 23:39:04.644399 through 23:39:04.675301 ...]
00:33:07.089 [2024-07-25 23:39:04.675464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.089 [2024-07-25 23:39:04.675490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.089 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.675636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.675664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.675838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.675866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.676010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.676035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.676171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.676226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.676381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.676410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.676598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.676623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.676732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.676779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.676993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.677030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.677182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.677208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 
00:33:07.090 [2024-07-25 23:39:04.677306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.677331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.677459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.677487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.677667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.677693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.677842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.677870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.678014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.678042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.678212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.678238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.678364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.678390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.678514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.678542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.678665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.678691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.678825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.678851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 
00:33:07.090 [2024-07-25 23:39:04.679009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.679034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.679156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.679182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.679289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.679315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.679415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.679441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.679551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.679577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.679709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.679735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.679836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.679861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.679993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.680019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.680183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.680214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.680363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.680392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 
00:33:07.090 [2024-07-25 23:39:04.680535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.680560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.680694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.680720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.680845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.680874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.681028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.681053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.681169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.681199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.681306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.681332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.681461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.681490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.681637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.681665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.681809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.681838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 00:33:07.090 [2024-07-25 23:39:04.681993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.090 [2024-07-25 23:39:04.682019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.090 qpair failed and we were unable to recover it. 
00:33:07.090 [2024-07-25 23:39:04.682139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.682166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.682303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.682329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.682465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.682490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.682596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.682622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.682779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.682806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.682960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.682985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.683124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.683151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.683256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.683282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.683392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.683418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.683532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.683557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 
00:33:07.091 [2024-07-25 23:39:04.683668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.683695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.683830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.683855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.684003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.684042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.684221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.684266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.684430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.684455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.684560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.684586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.684746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.684774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.684959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.684984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.685086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.685129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.685272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.685315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 
00:33:07.091 [2024-07-25 23:39:04.685427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.685452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.685562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.685594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.685757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.685783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.685956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.685981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.686082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.686109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.686281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.686308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.686466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.686491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.686721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.686778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.686921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.686949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.687112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.687139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 
00:33:07.091 [2024-07-25 23:39:04.687262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.687291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.687450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.687477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.687638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.687664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.687800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.687825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.687981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.688009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.688186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.688214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.688348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.688389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.688561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.688589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.688726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.688752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.091 qpair failed and we were unable to recover it. 00:33:07.091 [2024-07-25 23:39:04.688862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.091 [2024-07-25 23:39:04.688888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 
00:33:07.092 [2024-07-25 23:39:04.689035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.689071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.689233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.689258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.689390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.689416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.689550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.689576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.689771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.689796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.689903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.689929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.690070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.690096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.690227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.690252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.690395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.690439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.690598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.690623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 
00:33:07.092 [2024-07-25 23:39:04.690757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.690782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.690911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.690936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.691090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.691119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.691264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.691289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.691425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.691451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.691631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.691655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.691785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.691810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.691971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.692013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.692211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.692238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.692344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.692369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 
00:33:07.092 [2024-07-25 23:39:04.692503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.692529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.692714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.692747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.692884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.692911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.693050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.693085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.693241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.693267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.693403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.693428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.693552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.693578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.693728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.693757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.693893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.693918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.694049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.694083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 
00:33:07.092 [2024-07-25 23:39:04.694280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.694306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.694462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.694487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.694665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.694694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.694859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.694884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.092 qpair failed and we were unable to recover it. 00:33:07.092 [2024-07-25 23:39:04.695020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.092 [2024-07-25 23:39:04.695046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.093 qpair failed and we were unable to recover it. 00:33:07.093 [2024-07-25 23:39:04.695197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.093 [2024-07-25 23:39:04.695239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.093 qpair failed and we were unable to recover it. 00:33:07.093 [2024-07-25 23:39:04.695381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.093 [2024-07-25 23:39:04.695409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.093 qpair failed and we were unable to recover it. 00:33:07.093 [2024-07-25 23:39:04.695567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.093 [2024-07-25 23:39:04.695593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.093 qpair failed and we were unable to recover it. 00:33:07.093 [2024-07-25 23:39:04.695752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.093 [2024-07-25 23:39:04.695794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.093 qpair failed and we were unable to recover it. 00:33:07.093 [2024-07-25 23:39:04.695935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.093 [2024-07-25 23:39:04.695963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.093 qpair failed and we were unable to recover it. 
00:33:07.093 [2024-07-25 23:39:04.696097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.093 [2024-07-25 23:39:04.696124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.093 qpair failed and we were unable to recover it. 00:33:07.093 [2024-07-25 23:39:04.696236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.093 [2024-07-25 23:39:04.696263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.093 qpair failed and we were unable to recover it. 00:33:07.093 [2024-07-25 23:39:04.696399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.093 [2024-07-25 23:39:04.696425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.093 qpair failed and we were unable to recover it. 00:33:07.093 [2024-07-25 23:39:04.696616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.093 [2024-07-25 23:39:04.696642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.093 qpair failed and we were unable to recover it. 00:33:07.093 [2024-07-25 23:39:04.696779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.093 [2024-07-25 23:39:04.696805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.093 qpair failed and we were unable to recover it. 00:33:07.093 [2024-07-25 23:39:04.696941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.093 [2024-07-25 23:39:04.696968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.093 qpair failed and we were unable to recover it. 00:33:07.093 [2024-07-25 23:39:04.697142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.093 [2024-07-25 23:39:04.697169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.093 qpair failed and we were unable to recover it. 00:33:07.093 [2024-07-25 23:39:04.697308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.093 [2024-07-25 23:39:04.697334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.093 qpair failed and we were unable to recover it. 00:33:07.093 [2024-07-25 23:39:04.697486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.093 [2024-07-25 23:39:04.697524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.093 qpair failed and we were unable to recover it. 00:33:07.093 [2024-07-25 23:39:04.697668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.093 [2024-07-25 23:39:04.697694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.093 qpair failed and we were unable to recover it. 
00:33:07.093 [2024-07-25 23:39:04.697873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.093 [2024-07-25 23:39:04.697901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.093 qpair failed and we were unable to recover it. 00:33:07.093 [2024-07-25 23:39:04.698108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.093 [2024-07-25 23:39:04.698135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.093 qpair failed and we were unable to recover it. 00:33:07.093 [2024-07-25 23:39:04.698249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.093 [2024-07-25 23:39:04.698276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.093 qpair failed and we were unable to recover it. 00:33:07.093 [2024-07-25 23:39:04.698399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.093 [2024-07-25 23:39:04.698425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.093 qpair failed and we were unable to recover it. 00:33:07.093 [2024-07-25 23:39:04.698561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.093 [2024-07-25 23:39:04.698588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.093 qpair failed and we were unable to recover it. 00:33:07.093 [2024-07-25 23:39:04.698727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.093 [2024-07-25 23:39:04.698753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.093 qpair failed and we were unable to recover it. 00:33:07.093 [2024-07-25 23:39:04.698855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.093 [2024-07-25 23:39:04.698881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.093 qpair failed and we were unable to recover it. 00:33:07.093 [2024-07-25 23:39:04.699013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.093 [2024-07-25 23:39:04.699040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.093 qpair failed and we were unable to recover it. 00:33:07.093 [2024-07-25 23:39:04.699216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.093 [2024-07-25 23:39:04.699242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.093 qpair failed and we were unable to recover it. 00:33:07.093 [2024-07-25 23:39:04.699441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.699467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 
00:33:07.094 [2024-07-25 23:39:04.699619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.699645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 00:33:07.094 [2024-07-25 23:39:04.699757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.699788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 00:33:07.094 [2024-07-25 23:39:04.699933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.699975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 00:33:07.094 [2024-07-25 23:39:04.700157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.700187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 00:33:07.094 [2024-07-25 23:39:04.700374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.700400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 00:33:07.094 [2024-07-25 23:39:04.700512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.700555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 00:33:07.094 [2024-07-25 23:39:04.700742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.700769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 00:33:07.094 [2024-07-25 23:39:04.700876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.700902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 00:33:07.094 [2024-07-25 23:39:04.701007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.701034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 00:33:07.094 [2024-07-25 23:39:04.701237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.701266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 
00:33:07.094 [2024-07-25 23:39:04.701448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.701474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 00:33:07.094 [2024-07-25 23:39:04.701645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.701674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 00:33:07.094 [2024-07-25 23:39:04.701860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.701887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 00:33:07.094 [2024-07-25 23:39:04.701997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.702023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 00:33:07.094 [2024-07-25 23:39:04.702187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.702213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 00:33:07.094 [2024-07-25 23:39:04.702322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.702347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 00:33:07.094 [2024-07-25 23:39:04.702480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.702507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 00:33:07.094 [2024-07-25 23:39:04.702642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.702669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 00:33:07.094 [2024-07-25 23:39:04.702806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.702832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 00:33:07.094 [2024-07-25 23:39:04.702989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.703014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 
00:33:07.094 [2024-07-25 23:39:04.703177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.703207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 00:33:07.094 [2024-07-25 23:39:04.703349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.703379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 00:33:07.094 [2024-07-25 23:39:04.703567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.703593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 00:33:07.094 [2024-07-25 23:39:04.703754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.703797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 00:33:07.094 [2024-07-25 23:39:04.703927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.703958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 00:33:07.094 [2024-07-25 23:39:04.704116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.704142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 00:33:07.094 [2024-07-25 23:39:04.704254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.704280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 00:33:07.094 [2024-07-25 23:39:04.704408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.704436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.094 qpair failed and we were unable to recover it. 00:33:07.094 [2024-07-25 23:39:04.704595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.094 [2024-07-25 23:39:04.704622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 00:33:07.095 [2024-07-25 23:39:04.704797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.704826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 
00:33:07.095 [2024-07-25 23:39:04.704996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.705024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 00:33:07.095 [2024-07-25 23:39:04.705191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.705217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 00:33:07.095 [2024-07-25 23:39:04.705342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.705385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 00:33:07.095 [2024-07-25 23:39:04.705575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.705639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 00:33:07.095 [2024-07-25 23:39:04.705791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.705816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 00:33:07.095 [2024-07-25 23:39:04.705994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.706022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 00:33:07.095 [2024-07-25 23:39:04.706150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.706180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 00:33:07.095 [2024-07-25 23:39:04.706342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.706368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 00:33:07.095 [2024-07-25 23:39:04.706498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.706524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 00:33:07.095 [2024-07-25 23:39:04.706682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.706707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 
00:33:07.095 [2024-07-25 23:39:04.706909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.706934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 00:33:07.095 [2024-07-25 23:39:04.707044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.707081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 00:33:07.095 [2024-07-25 23:39:04.707216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.707242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 00:33:07.095 [2024-07-25 23:39:04.707372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.707398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 00:33:07.095 [2024-07-25 23:39:04.707557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.707600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 00:33:07.095 [2024-07-25 23:39:04.707743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.707786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 00:33:07.095 [2024-07-25 23:39:04.707924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.707949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 00:33:07.095 [2024-07-25 23:39:04.708124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.708149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 00:33:07.095 [2024-07-25 23:39:04.708334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.708362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 00:33:07.095 [2024-07-25 23:39:04.708512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.708538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 
00:33:07.095 [2024-07-25 23:39:04.708647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.708687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 00:33:07.095 [2024-07-25 23:39:04.708831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.708860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 00:33:07.095 [2024-07-25 23:39:04.708989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.709014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 00:33:07.095 [2024-07-25 23:39:04.709183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.709209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 00:33:07.095 [2024-07-25 23:39:04.709354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.709383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 00:33:07.095 [2024-07-25 23:39:04.709515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.709540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 00:33:07.095 [2024-07-25 23:39:04.709648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.095 [2024-07-25 23:39:04.709673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.095 qpair failed and we were unable to recover it. 00:33:07.095 [2024-07-25 23:39:04.709804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.709838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 00:33:07.096 [2024-07-25 23:39:04.709972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.709997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 00:33:07.096 [2024-07-25 23:39:04.710163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.710192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 
00:33:07.096 [2024-07-25 23:39:04.710382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.710411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 00:33:07.096 [2024-07-25 23:39:04.710536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.710560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 00:33:07.096 [2024-07-25 23:39:04.710671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.710696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 00:33:07.096 [2024-07-25 23:39:04.710803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.710828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 00:33:07.096 [2024-07-25 23:39:04.710963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.710988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 00:33:07.096 [2024-07-25 23:39:04.711120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.711164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 00:33:07.096 [2024-07-25 23:39:04.711325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.711351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 00:33:07.096 [2024-07-25 23:39:04.711484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.711509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 00:33:07.096 [2024-07-25 23:39:04.711627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.711653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 00:33:07.096 [2024-07-25 23:39:04.711809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.711838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 
00:33:07.096 [2024-07-25 23:39:04.711959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.711985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 00:33:07.096 [2024-07-25 23:39:04.712118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.712144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 00:33:07.096 [2024-07-25 23:39:04.712332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.712362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 00:33:07.096 [2024-07-25 23:39:04.712482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.712506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 00:33:07.096 [2024-07-25 23:39:04.712665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.712691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 00:33:07.096 [2024-07-25 23:39:04.712820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.712848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 00:33:07.096 [2024-07-25 23:39:04.712972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.712997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 00:33:07.096 [2024-07-25 23:39:04.713134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.713160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 00:33:07.096 [2024-07-25 23:39:04.713292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.713320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 00:33:07.096 [2024-07-25 23:39:04.713456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.713482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 
00:33:07.096 [2024-07-25 23:39:04.713586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.713613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 00:33:07.096 [2024-07-25 23:39:04.713765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.713810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 00:33:07.096 [2024-07-25 23:39:04.713978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.714005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 00:33:07.096 [2024-07-25 23:39:04.714139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.714165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 00:33:07.096 [2024-07-25 23:39:04.714274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.714300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 00:33:07.096 [2024-07-25 23:39:04.714436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.096 [2024-07-25 23:39:04.714463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.096 qpair failed and we were unable to recover it. 00:33:07.096 [2024-07-25 23:39:04.714645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.714673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 00:33:07.097 [2024-07-25 23:39:04.714817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.714847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 00:33:07.097 [2024-07-25 23:39:04.714983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.715010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 00:33:07.097 [2024-07-25 23:39:04.715116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.715142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 
00:33:07.097 [2024-07-25 23:39:04.715276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.715302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 00:33:07.097 [2024-07-25 23:39:04.715431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.715456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 00:33:07.097 [2024-07-25 23:39:04.715560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.715585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 00:33:07.097 [2024-07-25 23:39:04.715716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.715742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 00:33:07.097 [2024-07-25 23:39:04.715883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.715908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 00:33:07.097 [2024-07-25 23:39:04.716075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.716112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 00:33:07.097 [2024-07-25 23:39:04.716253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.716282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 00:33:07.097 [2024-07-25 23:39:04.716403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.716428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 00:33:07.097 [2024-07-25 23:39:04.716589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.716634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 00:33:07.097 [2024-07-25 23:39:04.716791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.716817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 
00:33:07.097 [2024-07-25 23:39:04.716922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.716947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 00:33:07.097 [2024-07-25 23:39:04.717180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.717209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 00:33:07.097 [2024-07-25 23:39:04.717392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.717441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 00:33:07.097 [2024-07-25 23:39:04.717570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.717595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 00:33:07.097 [2024-07-25 23:39:04.717729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.717756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 00:33:07.097 [2024-07-25 23:39:04.717894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.717923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 00:33:07.097 [2024-07-25 23:39:04.718091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.718118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 00:33:07.097 [2024-07-25 23:39:04.718223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.718248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 00:33:07.097 [2024-07-25 23:39:04.718413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.718443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 00:33:07.097 [2024-07-25 23:39:04.718578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.718603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 
00:33:07.097 [2024-07-25 23:39:04.718702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.718727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 00:33:07.097 [2024-07-25 23:39:04.718886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.718912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 00:33:07.097 [2024-07-25 23:39:04.719075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.719101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 00:33:07.097 [2024-07-25 23:39:04.719209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.719234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 00:33:07.097 [2024-07-25 23:39:04.719392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.097 [2024-07-25 23:39:04.719421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.097 qpair failed and we were unable to recover it. 00:33:07.098 [2024-07-25 23:39:04.719575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.098 [2024-07-25 23:39:04.719600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.098 qpair failed and we were unable to recover it. 00:33:07.098 [2024-07-25 23:39:04.719726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.098 [2024-07-25 23:39:04.719769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.098 qpair failed and we were unable to recover it. 00:33:07.098 [2024-07-25 23:39:04.719931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.098 [2024-07-25 23:39:04.719959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.098 qpair failed and we were unable to recover it. 00:33:07.098 [2024-07-25 23:39:04.720092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.098 [2024-07-25 23:39:04.720126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.098 qpair failed and we were unable to recover it. 00:33:07.098 [2024-07-25 23:39:04.720305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.098 [2024-07-25 23:39:04.720335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.098 qpair failed and we were unable to recover it. 
00:33:07.098 [2024-07-25 23:39:04.720497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.098 [2024-07-25 23:39:04.720545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.098 qpair failed and we were unable to recover it. 00:33:07.098 [2024-07-25 23:39:04.720665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.098 [2024-07-25 23:39:04.720696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.098 qpair failed and we were unable to recover it. 00:33:07.098 [2024-07-25 23:39:04.720807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.098 [2024-07-25 23:39:04.720833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.098 qpair failed and we were unable to recover it. 00:33:07.098 [2024-07-25 23:39:04.720990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.098 [2024-07-25 23:39:04.721021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.098 qpair failed and we were unable to recover it. 00:33:07.098 [2024-07-25 23:39:04.721161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.098 [2024-07-25 23:39:04.721186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.098 qpair failed and we were unable to recover it. 00:33:07.098 [2024-07-25 23:39:04.721318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.098 [2024-07-25 23:39:04.721343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.098 qpair failed and we were unable to recover it. 00:33:07.098 [2024-07-25 23:39:04.721464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.098 [2024-07-25 23:39:04.721493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.098 qpair failed and we were unable to recover it. 00:33:07.098 [2024-07-25 23:39:04.721614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.098 [2024-07-25 23:39:04.721639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.098 qpair failed and we were unable to recover it. 00:33:07.098 [2024-07-25 23:39:04.721741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.098 [2024-07-25 23:39:04.721765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.098 qpair failed and we were unable to recover it. 00:33:07.098 [2024-07-25 23:39:04.721946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.098 [2024-07-25 23:39:04.721974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.098 qpair failed and we were unable to recover it. 
00:33:07.098 [2024-07-25 23:39:04.722126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.098 [2024-07-25 23:39:04.722152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.098 qpair failed and we were unable to recover it. 00:33:07.098 [2024-07-25 23:39:04.722287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.098 [2024-07-25 23:39:04.722330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.098 qpair failed and we were unable to recover it. 00:33:07.098 [2024-07-25 23:39:04.722502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.098 [2024-07-25 23:39:04.722530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.098 qpair failed and we were unable to recover it. 00:33:07.098 [2024-07-25 23:39:04.722660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.098 [2024-07-25 23:39:04.722685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.098 qpair failed and we were unable to recover it. 00:33:07.098 [2024-07-25 23:39:04.722819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.098 [2024-07-25 23:39:04.722845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.098 qpair failed and we were unable to recover it. 00:33:07.098 [2024-07-25 23:39:04.723013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.098 [2024-07-25 23:39:04.723039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.098 qpair failed and we were unable to recover it. 00:33:07.098 [2024-07-25 23:39:04.723179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.098 [2024-07-25 23:39:04.723204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.098 qpair failed and we were unable to recover it. 00:33:07.098 [2024-07-25 23:39:04.723309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.098 [2024-07-25 23:39:04.723334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.098 qpair failed and we were unable to recover it. 00:33:07.098 [2024-07-25 23:39:04.723475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.098 [2024-07-25 23:39:04.723500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.098 qpair failed and we were unable to recover it. 00:33:07.098 [2024-07-25 23:39:04.723628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.723653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 
00:33:07.099 [2024-07-25 23:39:04.723782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.723809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.723939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.723964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.724107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.724133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.724239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.724265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.724436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.724464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.724613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.724638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.724824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.724852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.724961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.724989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.725116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.725141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.725275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.725301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 
00:33:07.099 [2024-07-25 23:39:04.725494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.725560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.725744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.725769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.725916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.725945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.726119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.726146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.726249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.726274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.726414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.726440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.726597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.726625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.726775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.726800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.726935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.726977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.727119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.727148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 
00:33:07.099 [2024-07-25 23:39:04.727297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.727322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.727427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.727457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.727592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.727621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.727745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.727771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.727889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.727914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.728088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.728116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.728275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.728300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.728409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.728435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.728591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.099 [2024-07-25 23:39:04.728620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.099 qpair failed and we were unable to recover it. 00:33:07.099 [2024-07-25 23:39:04.728775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.728800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 
00:33:07.100 [2024-07-25 23:39:04.728906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.728931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.729126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.729154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.729312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.729337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.729445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.729470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.729609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.729634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.729808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.729833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.729963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.729988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.730123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.730151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.730340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.730366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.730475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.730501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 
00:33:07.100 [2024-07-25 23:39:04.730633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.730658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.730793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.730819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.730944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.730970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.731080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.731105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.731231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.731256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.731367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.731392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.731520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.731546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.731677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.731702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.731806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.731835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.731935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.731960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 
00:33:07.100 [2024-07-25 23:39:04.732071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.732097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.732204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.732229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.732387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.732412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.732553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.732579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.732708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.732733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.732916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.732955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.733100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.733128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.733313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.733343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.733561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.733622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 00:33:07.100 [2024-07-25 23:39:04.733808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.100 [2024-07-25 23:39:04.733835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.100 qpair failed and we were unable to recover it. 
00:33:07.100 [2024-07-25 23:39:04.733991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.734022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.734215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.734243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.734356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.734382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.734516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.734543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.734673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.734701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.734849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.734874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.734979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.735004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.735195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.735226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.735364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.735390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.735522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.735548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 
00:33:07.101 [2024-07-25 23:39:04.735723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.735749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.735855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.735881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.736007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.736033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.736152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.736179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.736296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.736323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.736436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.736461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.736595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.736620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.736745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.736771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.736908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.736934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.737074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.737100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 
00:33:07.101 [2024-07-25 23:39:04.737238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.737263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.737400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.737426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.737556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.737581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.737714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.737741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.737897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.737925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.738151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.738179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.738315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.738341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.738478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.738502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.738635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.738664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 00:33:07.101 [2024-07-25 23:39:04.738772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.101 [2024-07-25 23:39:04.738797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.101 qpair failed and we were unable to recover it. 
00:33:07.102 [2024-07-25 23:39:04.738927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.102 [2024-07-25 23:39:04.738953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.102 qpair failed and we were unable to recover it. 00:33:07.102 [2024-07-25 23:39:04.739094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.102 [2024-07-25 23:39:04.739121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.102 qpair failed and we were unable to recover it. 00:33:07.102 [2024-07-25 23:39:04.739268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.102 [2024-07-25 23:39:04.739295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.102 qpair failed and we were unable to recover it. 00:33:07.102 [2024-07-25 23:39:04.739395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.102 [2024-07-25 23:39:04.739421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.102 qpair failed and we were unable to recover it. 00:33:07.102 [2024-07-25 23:39:04.739535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.102 [2024-07-25 23:39:04.739561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.102 qpair failed and we were unable to recover it. 00:33:07.383 [2024-07-25 23:39:04.739724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.383 [2024-07-25 23:39:04.739750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.383 qpair failed and we were unable to recover it. 00:33:07.383 [2024-07-25 23:39:04.739881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.383 [2024-07-25 23:39:04.739907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.383 qpair failed and we were unable to recover it. 00:33:07.383 [2024-07-25 23:39:04.740013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.383 [2024-07-25 23:39:04.740038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.383 qpair failed and we were unable to recover it. 00:33:07.383 [2024-07-25 23:39:04.740191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.740231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.740362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.740400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 
00:33:07.384 [2024-07-25 23:39:04.740513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.740539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.740705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.740759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.740954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.741006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.741148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.741176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.741287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.741313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.741445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.741470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.741619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.741672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.741894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.741944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.742091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.742118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.742227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.742253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 
00:33:07.384 [2024-07-25 23:39:04.742379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.742408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.742581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.742608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.742720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.742748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.742888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.742918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.743040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.743072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.743187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.743213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.743311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.743336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.743464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.743493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.743663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.743691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.743804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.743836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 
00:33:07.384 [2024-07-25 23:39:04.743997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.744026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.744144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.744171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.744273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.744300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.744465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.744510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.744728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.744773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.744930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.744956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.745074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.745101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.745255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.745299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.745448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.745497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.745630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.745658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 
00:33:07.384 [2024-07-25 23:39:04.745830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.745869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.745980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.746007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.746171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.746201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.384 [2024-07-25 23:39:04.746319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.384 [2024-07-25 23:39:04.746348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.384 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.746483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.746511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.746633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.746675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.746870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.746898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.747039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.747074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.747214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.747243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.747429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.747457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 
00:33:07.385 [2024-07-25 23:39:04.747569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.747597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.747743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.747771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.747931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.747960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.748111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.748138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.748293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.748322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.748491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.748534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.748802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.748855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.748986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.749012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.749167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.749212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.749325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.749369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 
00:33:07.385 [2024-07-25 23:39:04.749492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.749522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.749644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.749671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.749804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.749830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.749971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.749997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.750148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.750180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.750302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.750352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.750497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.750540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.750762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.750823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.750944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.750971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.751110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.751137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 
00:33:07.385 [2024-07-25 23:39:04.751258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.751286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.751402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.751431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.751590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.751617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.751791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.751836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.751947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.751973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.752145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.752190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.752345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.752373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.752528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.752554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.752684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.752709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 00:33:07.385 [2024-07-25 23:39:04.752857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.385 [2024-07-25 23:39:04.752883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.385 qpair failed and we were unable to recover it. 
00:33:07.385 [2024-07-25 23:39:04.753032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.753080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.753220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.753246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.753416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.753441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.753600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.753626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.753787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.753812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.753918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.753945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.754078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.754122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.754227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.754253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.754404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.754432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.754669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.754728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 
00:33:07.386 [2024-07-25 23:39:04.754878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.754906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.755118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.755144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.755296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.755334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.755522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.755570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.755862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.755915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.756026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.756053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.756229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.756255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.756387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.756416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.756625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.756667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.756826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.756870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 
00:33:07.386 [2024-07-25 23:39:04.757008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.757034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.757201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.757245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.757378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.757421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.757585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.757627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.757839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.757866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.757980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.758010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.758161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.758187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.758311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.758352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.758495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.758554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.758823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.758875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 
00:33:07.386 [2024-07-25 23:39:04.758993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.759022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.759163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.759189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.759345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.759373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.759543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.386 [2024-07-25 23:39:04.759571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.386 qpair failed and we were unable to recover it. 00:33:07.386 [2024-07-25 23:39:04.759750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.387 [2024-07-25 23:39:04.759801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.387 qpair failed and we were unable to recover it. 00:33:07.387 [2024-07-25 23:39:04.760032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.387 [2024-07-25 23:39:04.760070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.387 qpair failed and we were unable to recover it. 00:33:07.387 [2024-07-25 23:39:04.760223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.387 [2024-07-25 23:39:04.760248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.387 qpair failed and we were unable to recover it. 00:33:07.387 [2024-07-25 23:39:04.760408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.387 [2024-07-25 23:39:04.760438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.387 qpair failed and we were unable to recover it. 00:33:07.387 [2024-07-25 23:39:04.760702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.387 [2024-07-25 23:39:04.760757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.387 qpair failed and we were unable to recover it. 00:33:07.387 [2024-07-25 23:39:04.760889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.387 [2024-07-25 23:39:04.760919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.387 qpair failed and we were unable to recover it. 
00:33:07.387 [2024-07-25 23:39:04.761070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.387 [2024-07-25 23:39:04.761113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.387 qpair failed and we were unable to recover it. 00:33:07.387 [2024-07-25 23:39:04.761223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.387 [2024-07-25 23:39:04.761248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.387 qpair failed and we were unable to recover it. 00:33:07.387 [2024-07-25 23:39:04.761385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.387 [2024-07-25 23:39:04.761411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.387 qpair failed and we were unable to recover it. 00:33:07.387 [2024-07-25 23:39:04.761539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.387 [2024-07-25 23:39:04.761567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.387 qpair failed and we were unable to recover it. 00:33:07.387 [2024-07-25 23:39:04.761707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.387 [2024-07-25 23:39:04.761735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.387 qpair failed and we were unable to recover it. 00:33:07.387 [2024-07-25 23:39:04.761906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.387 [2024-07-25 23:39:04.761934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.387 qpair failed and we were unable to recover it. 00:33:07.387 [2024-07-25 23:39:04.762078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.387 [2024-07-25 23:39:04.762104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.387 qpair failed and we were unable to recover it. 00:33:07.387 [2024-07-25 23:39:04.762237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.387 [2024-07-25 23:39:04.762262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.387 qpair failed and we were unable to recover it. 00:33:07.387 [2024-07-25 23:39:04.762437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.387 [2024-07-25 23:39:04.762465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.387 qpair failed and we were unable to recover it. 00:33:07.387 [2024-07-25 23:39:04.762614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.387 [2024-07-25 23:39:04.762656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.387 qpair failed and we were unable to recover it. 
00:33:07.387 [2024-07-25 23:39:04.762868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.387 [2024-07-25 23:39:04.762896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.387 qpair failed and we were unable to recover it. 00:33:07.387 [2024-07-25 23:39:04.763066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.387 [2024-07-25 23:39:04.763092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.387 qpair failed and we were unable to recover it. 00:33:07.387 [2024-07-25 23:39:04.763233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.387 [2024-07-25 23:39:04.763259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.387 qpair failed and we were unable to recover it. 00:33:07.387 [2024-07-25 23:39:04.763386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.387 [2024-07-25 23:39:04.763414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.387 qpair failed and we were unable to recover it. 00:33:07.387 [2024-07-25 23:39:04.763554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.387 [2024-07-25 23:39:04.763582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.387 qpair failed and we were unable to recover it. 00:33:07.387 [2024-07-25 23:39:04.763760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.387 [2024-07-25 23:39:04.763788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.387 qpair failed and we were unable to recover it. 00:33:07.387 [2024-07-25 23:39:04.763901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.387 [2024-07-25 23:39:04.763929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.387 qpair failed and we were unable to recover it. 00:33:07.387 [2024-07-25 23:39:04.764052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.387 [2024-07-25 23:39:04.764123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.387 qpair failed and we were unable to recover it. 00:33:07.387 [2024-07-25 23:39:04.764269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.387 [2024-07-25 23:39:04.764297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.387 qpair failed and we were unable to recover it. 00:33:07.387 [2024-07-25 23:39:04.764522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.387 [2024-07-25 23:39:04.764550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.387 qpair failed and we were unable to recover it. 
00:33:07.387 [2024-07-25 23:39:04.764702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.387 [2024-07-25 23:39:04.764730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.387 qpair failed and we were unable to recover it.
00:33:07.387 [2024-07-25 23:39:04.764886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.387 [2024-07-25 23:39:04.764915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.387 qpair failed and we were unable to recover it.
00:33:07.387 [2024-07-25 23:39:04.765071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.387 [2024-07-25 23:39:04.765106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.387 qpair failed and we were unable to recover it.
00:33:07.387 [2024-07-25 23:39:04.765238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.387 [2024-07-25 23:39:04.765263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.387 qpair failed and we were unable to recover it.
00:33:07.387 [2024-07-25 23:39:04.765378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.387 [2024-07-25 23:39:04.765420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.387 qpair failed and we were unable to recover it.
00:33:07.387 [2024-07-25 23:39:04.765536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.387 [2024-07-25 23:39:04.765564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.387 qpair failed and we were unable to recover it.
00:33:07.387 [2024-07-25 23:39:04.765691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.387 [2024-07-25 23:39:04.765720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.387 qpair failed and we were unable to recover it.
00:33:07.387 [2024-07-25 23:39:04.765873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.387 [2024-07-25 23:39:04.765902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.387 qpair failed and we were unable to recover it.
00:33:07.387 [2024-07-25 23:39:04.766050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.387 [2024-07-25 23:39:04.766085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.387 qpair failed and we were unable to recover it.
00:33:07.387 [2024-07-25 23:39:04.766200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.387 [2024-07-25 23:39:04.766226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.387 qpair failed and we were unable to recover it.
00:33:07.387 [2024-07-25 23:39:04.766406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.387 [2024-07-25 23:39:04.766435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.387 qpair failed and we were unable to recover it.
00:33:07.387 [2024-07-25 23:39:04.766608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.766636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.766781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.766809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.766981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.767009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.767182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.767208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.767315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.767340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.767473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.767501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.767668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.767696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.767838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.767866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.768041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.768080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.768216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.768242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.768349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.768375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.768512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.768537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.768661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.768689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.768830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.768859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.769004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.769032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.769187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.769213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.769359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.769387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.769531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.769559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.769736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.769779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.769935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.769963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.770116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.770142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.770272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.770297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.770427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.770456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.770565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.770590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.770719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.770745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.770889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.770913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.771040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.771074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.771193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.771219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.771325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.771351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.388 [2024-07-25 23:39:04.771495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.388 [2024-07-25 23:39:04.771523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.388 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.771685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.771710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.771823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.771848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.771981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.772006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.772150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.772176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.772314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.772339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.772481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.772522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.772626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.772651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.772791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.772816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.772972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.773000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.773181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.773207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.773334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.773375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.773524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.773550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.773682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.773708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.773819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.773858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.774002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.774029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.774183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.774209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.774347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.774374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.774506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.774532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.774671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.774697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.774804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.774834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.774956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.774997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.775113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.775140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.775289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.775330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.775483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.775511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.775663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.775688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.775822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.775868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.776010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.776038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.776180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.776206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.776320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.776346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.776477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.776502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.776632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.776658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.776762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.776789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.776919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.776945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.777068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.777094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.777191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.777216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.777328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.777372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.389 qpair failed and we were unable to recover it.
00:33:07.389 [2024-07-25 23:39:04.777524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.389 [2024-07-25 23:39:04.777550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.777650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.777676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.777860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.777888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.778017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.778042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.778196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.778234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.778368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.778398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.778585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.778610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.778757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.778785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.778941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.778966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.779094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.779120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.779250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.779297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.779453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.779479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.779584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.779609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.779744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.779770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.779930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.779958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.780114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.780141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.780273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.780298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.780449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.780477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.780626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.780652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.780753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.780778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.780906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.780931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.781038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.781070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.781208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.781233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.781356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.781384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.781542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.781567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.781668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.781694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.781818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.781847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.781981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.782006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.782169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.782208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.782322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.782349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.782455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.782481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.782640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.782681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.782827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.782855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.783012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.783038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.783155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.783181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.390 qpair failed and we were unable to recover it.
00:33:07.390 [2024-07-25 23:39:04.783321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.390 [2024-07-25 23:39:04.783363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.783515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.783541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.783727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.783798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.783922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.783952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.784108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.784134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.784234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.784259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.784449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.784477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.784609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.784635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.784769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.784795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.784954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.784982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.785133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.785158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.785316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.785361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.785544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.785572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.785694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.785719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.785849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.785874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.785997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.786026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.786200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.786226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.786331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.786357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.786499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.786528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.786685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.786710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.786848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.786874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.787039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.787074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.787305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.787331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.787483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.787512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.787650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.787679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.787834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.787859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.788050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.788125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.788243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.788269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.788411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.788438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.788545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.788576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.788735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.788763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.788921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.788946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.789122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.789150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.789310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.789336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.789442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.789467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.391 qpair failed and we were unable to recover it.
00:33:07.391 [2024-07-25 23:39:04.789580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.391 [2024-07-25 23:39:04.789605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.392 qpair failed and we were unable to recover it.
00:33:07.392 [2024-07-25 23:39:04.789738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.392 [2024-07-25 23:39:04.789762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.392 qpair failed and we were unable to recover it.
00:33:07.392 [2024-07-25 23:39:04.789872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.392 [2024-07-25 23:39:04.789897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.392 qpair failed and we were unable to recover it.
00:33:07.392 [2024-07-25 23:39:04.789997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.392 [2024-07-25 23:39:04.790022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.392 qpair failed and we were unable to recover it.
00:33:07.392 [2024-07-25 23:39:04.790159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.392 [2024-07-25 23:39:04.790185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.392 qpair failed and we were unable to recover it.
00:33:07.392 [2024-07-25 23:39:04.790298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.392 [2024-07-25 23:39:04.790325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.392 qpair failed and we were unable to recover it.
00:33:07.392 [2024-07-25 23:39:04.790464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.392 [2024-07-25 23:39:04.790489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.392 qpair failed and we were unable to recover it.
00:33:07.392 [2024-07-25 23:39:04.790625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.392 [2024-07-25 23:39:04.790650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.392 qpair failed and we were unable to recover it.
00:33:07.392 [2024-07-25 23:39:04.790841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.392 [2024-07-25 23:39:04.790870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.392 qpair failed and we were unable to recover it.
00:33:07.392 [2024-07-25 23:39:04.790978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.392 [2024-07-25 23:39:04.791006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.392 qpair failed and we were unable to recover it.
00:33:07.392 [2024-07-25 23:39:04.791160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.392 [2024-07-25 23:39:04.791186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.392 qpair failed and we were unable to recover it.
00:33:07.392 [2024-07-25 23:39:04.791348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.392 [2024-07-25 23:39:04.791373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.392 qpair failed and we were unable to recover it.
00:33:07.392 [2024-07-25 23:39:04.791481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.392 [2024-07-25 23:39:04.791523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.392 qpair failed and we were unable to recover it.
00:33:07.392 [2024-07-25 23:39:04.791634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.392 [2024-07-25 23:39:04.791663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.392 qpair failed and we were unable to recover it.
00:33:07.392 [2024-07-25 23:39:04.791805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.392 [2024-07-25 23:39:04.791833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.392 qpair failed and we were unable to recover it.
00:33:07.392 [2024-07-25 23:39:04.792004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.392 [2024-07-25 23:39:04.792032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.392 qpair failed and we were unable to recover it.
00:33:07.392 [2024-07-25 23:39:04.792190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.392 [2024-07-25 23:39:04.792216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.392 qpair failed and we were unable to recover it.
00:33:07.392 [2024-07-25 23:39:04.792346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.392 [2024-07-25 23:39:04.792372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.392 qpair failed and we were unable to recover it.
00:33:07.392 [2024-07-25 23:39:04.792502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.392 [2024-07-25 23:39:04.792546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.392 qpair failed and we were unable to recover it.
00:33:07.392 [2024-07-25 23:39:04.792652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.392 [2024-07-25 23:39:04.792680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.392 qpair failed and we were unable to recover it.
00:33:07.392 [2024-07-25 23:39:04.792861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.392 [2024-07-25 23:39:04.792886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.392 qpair failed and we were unable to recover it.
00:33:07.392 [2024-07-25 23:39:04.792985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.392 [2024-07-25 23:39:04.793026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.392 qpair failed and we were unable to recover it.
00:33:07.392 [2024-07-25 23:39:04.793194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.392 [2024-07-25 23:39:04.793220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.392 qpair failed and we were unable to recover it.
00:33:07.392 [2024-07-25 23:39:04.793356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.392 [2024-07-25 23:39:04.793381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.392 qpair failed and we were unable to recover it. 00:33:07.392 [2024-07-25 23:39:04.793512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.392 [2024-07-25 23:39:04.793537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.392 qpair failed and we were unable to recover it. 00:33:07.392 [2024-07-25 23:39:04.793692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.392 [2024-07-25 23:39:04.793719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.392 qpair failed and we were unable to recover it. 00:33:07.392 [2024-07-25 23:39:04.793842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.392 [2024-07-25 23:39:04.793866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.392 qpair failed and we were unable to recover it. 00:33:07.392 [2024-07-25 23:39:04.793998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.392 [2024-07-25 23:39:04.794023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.392 qpair failed and we were unable to recover it. 00:33:07.392 [2024-07-25 23:39:04.794172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.392 [2024-07-25 23:39:04.794197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.392 qpair failed and we were unable to recover it. 00:33:07.392 [2024-07-25 23:39:04.794332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.392 [2024-07-25 23:39:04.794358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.392 qpair failed and we were unable to recover it. 00:33:07.392 [2024-07-25 23:39:04.794468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.392 [2024-07-25 23:39:04.794493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.392 qpair failed and we were unable to recover it. 00:33:07.392 [2024-07-25 23:39:04.794682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.392 [2024-07-25 23:39:04.794707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.392 qpair failed and we were unable to recover it. 00:33:07.392 [2024-07-25 23:39:04.794849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.392 [2024-07-25 23:39:04.794874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.392 qpair failed and we were unable to recover it. 
00:33:07.392 [2024-07-25 23:39:04.794973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.392 [2024-07-25 23:39:04.794998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.392 qpair failed and we were unable to recover it. 00:33:07.392 [2024-07-25 23:39:04.795131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.392 [2024-07-25 23:39:04.795157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.392 qpair failed and we were unable to recover it. 00:33:07.392 [2024-07-25 23:39:04.795296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.392 [2024-07-25 23:39:04.795322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.392 qpair failed and we were unable to recover it. 00:33:07.392 [2024-07-25 23:39:04.795429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.795454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.795561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.795586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.795712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.795738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.795867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.795892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.795999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.796024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.796196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.796222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.796324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.796369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 
00:33:07.393 [2024-07-25 23:39:04.796497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.796525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.796682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.796707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.796839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.796881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.797041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.797072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.797235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.797260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.797374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.797417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.797546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.797575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.797756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.797781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.797921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.797949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.798121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.798146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 
00:33:07.393 [2024-07-25 23:39:04.798281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.798307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.798436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.798478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.798648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.798677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.798792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.798818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.798955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.798980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.799177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.799206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.799347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.799372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.799503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.799528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.799718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.799746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.799869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.799898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 
00:33:07.393 [2024-07-25 23:39:04.800057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.800114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.800247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.800273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.800405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.800430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.800564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.800606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.393 [2024-07-25 23:39:04.800747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.393 [2024-07-25 23:39:04.800775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.393 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.800955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.800981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.801170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.801198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.801352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.801378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.801489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.801515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.801626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.801652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 
00:33:07.394 [2024-07-25 23:39:04.801761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.801786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.801924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.801949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.802051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.802099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.802254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.802282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.802406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.802431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.802535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.802561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.802692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.802720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.802841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.802866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.803002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.803028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.803156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.803185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 
00:33:07.394 [2024-07-25 23:39:04.803366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.803391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.803537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.803565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.803677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.803705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.803841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.803867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.803968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.803993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.804152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.804178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.804321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.804346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.804534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.804562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.804709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.804737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.804856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.804881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 
00:33:07.394 [2024-07-25 23:39:04.805009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.805034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.805201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.805230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.805401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.805427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.805558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.805598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.805743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.805772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.805897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.805923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.806030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.806055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.806207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.806235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.806412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.806437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.394 qpair failed and we were unable to recover it. 00:33:07.394 [2024-07-25 23:39:04.806615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.394 [2024-07-25 23:39:04.806643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 
00:33:07.395 [2024-07-25 23:39:04.806762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.806790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.806964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.806989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.807095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.807121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.807248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.807276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.807423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.807448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.807584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.807609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.807718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.807743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.807908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.807933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.808036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.808102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.808210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.808238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 
00:33:07.395 [2024-07-25 23:39:04.808363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.808388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.808513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.808539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.808654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.808682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.808843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.808868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.809008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.809049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.809205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.809233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.809361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.809386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.809498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.809524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.809651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.809676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.809805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.809831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 
00:33:07.395 [2024-07-25 23:39:04.809941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.809982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.810127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.810156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.810285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.810310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.810469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.810495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.810625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.810654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.810834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.810859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.811034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.811069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.811218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.811252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.811387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.811412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.811569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.811610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 
00:33:07.395 [2024-07-25 23:39:04.811725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.811767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.811900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.811925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.812025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.812051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.812230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.812255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.812384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.812410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.812511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.395 [2024-07-25 23:39:04.812536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.395 qpair failed and we were unable to recover it. 00:33:07.395 [2024-07-25 23:39:04.812690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.812715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.812817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.812842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.812949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.812974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.813099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.813128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 
00:33:07.396 [2024-07-25 23:39:04.813248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.813274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.813408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.813434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.813588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.813613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.813748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.813774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.813877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.813902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.814072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.814101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.814252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.814277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.814409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.814451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.814566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.814594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.814748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.814773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 
00:33:07.396 [2024-07-25 23:39:04.814879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.814905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.815039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.815072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.815203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.815229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.815374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.815402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.815550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.815578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.815733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.815758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.815896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.815920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.816078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.816104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.816233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.816258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.816437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.816465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 
00:33:07.396 [2024-07-25 23:39:04.816637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.816665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.816819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.816845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.816958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.816984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.817150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.817176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.817348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.817374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.817500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.817541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.817718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.817743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.817902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.817927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.818024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.818053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 00:33:07.396 [2024-07-25 23:39:04.818163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.396 [2024-07-25 23:39:04.818188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.396 qpair failed and we were unable to recover it. 
00:33:07.396 [2024-07-25 23:39:04.818323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.396 [2024-07-25 23:39:04.818348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.396 qpair failed and we were unable to recover it.
00:33:07.396 [2024-07-25 23:39:04.818453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.818479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.818613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.818641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.818818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.818843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.818972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.819015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.819143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.819172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.819296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.819322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.819439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.819464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.819591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.819616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.819776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.819802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.819936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.819961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.820072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.820097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.820247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.820272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.820450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.820478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.820622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.820650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.820794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.820819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.820945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.820971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.821136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.821165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.821324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.821350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.821453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.821478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.821675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.821703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.821829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.821854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.821986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.822012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.822207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.822236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.822397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.822423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.822536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.822566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.822679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.822705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.822813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.822838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.822969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.822994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.823158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.823187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.823313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.823338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.823464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.823490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.823623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.823651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.823804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.823829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.823957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.397 [2024-07-25 23:39:04.823982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.397 qpair failed and we were unable to recover it.
00:33:07.397 [2024-07-25 23:39:04.824144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.824173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.824329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.824355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.824483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.824509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.824652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.824680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.824872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.824897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.825032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.825057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.825240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.825268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.825389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.825414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.825541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.825567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.825722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.825747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.825879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.825904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.826038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.826091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.826239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.826268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.826395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.826421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.826554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.826579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.826705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.826733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.826856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.826882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.826989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.827014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.827209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.827235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.827366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.827391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.827492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.827517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.827646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.827673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.827802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.827827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.827933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.827959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.828120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.828150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.828284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.828310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.828470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.828514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.828624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.828651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.828795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.828821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.828953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.828978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.829151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.398 [2024-07-25 23:39:04.829180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.398 qpair failed and we were unable to recover it.
00:33:07.398 [2024-07-25 23:39:04.829304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.829333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.829472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.829497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.829626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.829652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.829844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.829869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.829998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.830024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.830155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.830181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.830279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.830304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.830409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.830434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.830583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.830611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.830737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.830763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.830929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.830972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.831158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.831185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.831286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.831311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.831441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.831467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.831619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.831648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.831835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.831860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.832013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.832042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.832195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.832223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.832375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.832401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.832539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.832565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.832700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.832727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.832899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.832925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.833073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.833102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.833271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.833299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.833424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.833449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.833588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.833613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.833768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.833796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.833949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.833979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.834089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.834115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.834218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.834243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.834354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.834379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.834477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.834502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.834667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.834695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.399 qpair failed and we were unable to recover it.
00:33:07.399 [2024-07-25 23:39:04.834818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.399 [2024-07-25 23:39:04.834843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.834976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.835003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.835142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.835171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.835351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.835377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.835489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.835514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.835652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.835677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.835803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.835828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.835960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.836002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.836185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.836211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.836342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.836367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.836497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.836523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.836658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.836683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.836844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.836869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.836981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.837006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.837160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.837186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.837324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.837349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.837480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.837505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.837665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.837694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.837840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.837865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.837982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.838008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.838143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.838169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.838308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.838333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.838469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.838513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.838661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.838689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.838837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.838862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.838995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.839036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.839160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.839188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.839341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.839366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.839542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.839571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.839740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.839768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.839941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.839966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.840081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.840108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.840267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.840293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.840428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.840453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.840557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.400 [2024-07-25 23:39:04.840583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.400 qpair failed and we were unable to recover it.
00:33:07.400 [2024-07-25 23:39:04.840712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.840741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.840847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.840872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.840980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.841005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.841181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.841210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.841342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.841368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.841494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.841519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.841671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.841700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.841883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.841908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.842017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.842069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.842213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.842241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.842376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.842402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.842506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.842531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.842655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.842683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.842800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.842826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.842991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.843017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.843187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.843216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.843395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.843421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.843545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.843588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.843728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.843756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.843896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.843921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.844030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.844055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.844256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.844284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.844405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.844430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.844567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.844593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.844712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.844741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.844894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.844920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.845051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.845086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.845236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.845269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.845453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.845479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.845617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.845643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.845777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.845802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.401 [2024-07-25 23:39:04.845961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.401 [2024-07-25 23:39:04.845987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.401 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.846095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.846121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.846251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.846276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.846437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.846462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.846560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.846602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.846711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.846739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.846920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.846945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.847049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.847099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.847245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.847270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.847407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.847432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.847538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.847564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.847729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.847754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.847878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.847903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.848012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.848037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.848230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.848258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.848421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.848447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.848584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.848626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.848783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.848809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.848921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.848946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.849086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.849112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.849231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.849259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.849413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.849438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.849611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.849639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.849752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.849780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.849933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.849959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.850053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.850087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.850240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.850269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.850424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.850449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.850581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.850625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.850765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.850793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.850973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.850998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.851173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.851202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.851323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.851353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.851536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.851561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.851716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.851744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.851888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.851917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.852069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.852095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.402 qpair failed and we were unable to recover it.
00:33:07.402 [2024-07-25 23:39:04.852235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.402 [2024-07-25 23:39:04.852264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.403 qpair failed and we were unable to recover it.
00:33:07.403 [2024-07-25 23:39:04.852399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.403 [2024-07-25 23:39:04.852424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.403 qpair failed and we were unable to recover it.
00:33:07.403 [2024-07-25 23:39:04.852580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.403 [2024-07-25 23:39:04.852605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.403 qpair failed and we were unable to recover it.
00:33:07.403 [2024-07-25 23:39:04.852714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.403 [2024-07-25 23:39:04.852739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.403 qpair failed and we were unable to recover it.
00:33:07.403 [2024-07-25 23:39:04.852871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.403 [2024-07-25 23:39:04.852897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.403 qpair failed and we were unable to recover it.
00:33:07.403 [2024-07-25 23:39:04.853000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.403 [2024-07-25 23:39:04.853025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.403 qpair failed and we were unable to recover it.
00:33:07.403 [2024-07-25 23:39:04.853167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.403 [2024-07-25 23:39:04.853193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.403 qpair failed and we were unable to recover it.
00:33:07.403 [2024-07-25 23:39:04.853322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.403 [2024-07-25 23:39:04.853347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.403 qpair failed and we were unable to recover it.
00:33:07.403 [2024-07-25 23:39:04.853522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.403 [2024-07-25 23:39:04.853547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.403 qpair failed and we were unable to recover it. 00:33:07.403 [2024-07-25 23:39:04.853678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.403 [2024-07-25 23:39:04.853721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.403 qpair failed and we were unable to recover it. 00:33:07.403 [2024-07-25 23:39:04.853900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.403 [2024-07-25 23:39:04.853928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.403 qpair failed and we were unable to recover it. 00:33:07.403 [2024-07-25 23:39:04.854057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.403 [2024-07-25 23:39:04.854090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.403 qpair failed and we were unable to recover it. 00:33:07.403 [2024-07-25 23:39:04.854245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.403 [2024-07-25 23:39:04.854271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.403 qpair failed and we were unable to recover it. 00:33:07.403 [2024-07-25 23:39:04.854432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.403 [2024-07-25 23:39:04.854457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.403 qpair failed and we were unable to recover it. 00:33:07.403 [2024-07-25 23:39:04.854589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.403 [2024-07-25 23:39:04.854614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.403 qpair failed and we were unable to recover it. 00:33:07.403 [2024-07-25 23:39:04.854724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.403 [2024-07-25 23:39:04.854749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.403 qpair failed and we were unable to recover it. 00:33:07.403 [2024-07-25 23:39:04.854924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.403 [2024-07-25 23:39:04.854950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.403 qpair failed and we were unable to recover it. 00:33:07.403 [2024-07-25 23:39:04.855057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.403 [2024-07-25 23:39:04.855090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.403 qpair failed and we were unable to recover it. 
00:33:07.403 [2024-07-25 23:39:04.855191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.403 [2024-07-25 23:39:04.855216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.403 qpair failed and we were unable to recover it. 00:33:07.403 [2024-07-25 23:39:04.855398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.403 [2024-07-25 23:39:04.855426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.403 qpair failed and we were unable to recover it. 00:33:07.403 [2024-07-25 23:39:04.855573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.403 [2024-07-25 23:39:04.855598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.403 qpair failed and we were unable to recover it. 00:33:07.403 [2024-07-25 23:39:04.855732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.403 [2024-07-25 23:39:04.855773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.403 qpair failed and we were unable to recover it. 00:33:07.403 [2024-07-25 23:39:04.855895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.403 [2024-07-25 23:39:04.855923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.403 qpair failed and we were unable to recover it. 00:33:07.403 [2024-07-25 23:39:04.856073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.403 [2024-07-25 23:39:04.856099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.403 qpair failed and we were unable to recover it. 00:33:07.403 [2024-07-25 23:39:04.856235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.403 [2024-07-25 23:39:04.856260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.403 qpair failed and we were unable to recover it. 00:33:07.403 [2024-07-25 23:39:04.856358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.403 [2024-07-25 23:39:04.856383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.403 qpair failed and we were unable to recover it. 00:33:07.403 [2024-07-25 23:39:04.856523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.403 [2024-07-25 23:39:04.856548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.403 qpair failed and we were unable to recover it. 00:33:07.403 [2024-07-25 23:39:04.856655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.403 [2024-07-25 23:39:04.856696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.403 qpair failed and we were unable to recover it. 
00:33:07.403 [2024-07-25 23:39:04.856863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.403 [2024-07-25 23:39:04.856889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.403 qpair failed and we were unable to recover it. 00:33:07.403 [2024-07-25 23:39:04.857055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.403 [2024-07-25 23:39:04.857098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.403 qpair failed and we were unable to recover it. 00:33:07.403 [2024-07-25 23:39:04.857255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.403 [2024-07-25 23:39:04.857283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.403 qpair failed and we were unable to recover it. 00:33:07.403 [2024-07-25 23:39:04.857424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.857451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.857601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.857626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.857751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.857776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.857913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.857957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.858095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.858121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.858253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.858278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.858436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.858464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 
00:33:07.404 [2024-07-25 23:39:04.858649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.858675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.858812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.858837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.858936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.858962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.859116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.859154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.859290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.859317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.859442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.859486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.859621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.859664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.859817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.859862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.859995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.860021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.860166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.860193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 
00:33:07.404 [2024-07-25 23:39:04.860300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.860326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.860462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.860488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.860613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.860639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.860744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.860770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.860909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.860935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.861114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.861144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.861272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.861297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.861466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.861495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.861615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.861643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.861807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.861835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 
00:33:07.404 [2024-07-25 23:39:04.862016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.862044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.862202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.862231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.862352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.862380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.862492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.862520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.862686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.862713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.862846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.862872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.863003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.863028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.863172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.863199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.863375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.863401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 00:33:07.404 [2024-07-25 23:39:04.863523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.404 [2024-07-25 23:39:04.863552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.404 qpair failed and we were unable to recover it. 
00:33:07.404 [2024-07-25 23:39:04.863697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.863726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.863866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.863891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.864000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.864026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.864156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.864182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.864320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.864362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.864472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.864500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.864801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.864856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.865030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.865066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.865242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.865267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.865403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.865432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 
00:33:07.405 [2024-07-25 23:39:04.865580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.865608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.865778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.865806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.865953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.865992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.866140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.866167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.866307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.866351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.866477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.866506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.866705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.866734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.866859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.866885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.867019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.867046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.867214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.867258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 
00:33:07.405 [2024-07-25 23:39:04.867414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.867457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.867720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.867764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.867897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.867923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.868064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.868091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.868229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.868256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.868424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.868450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.868576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.868604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.868778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.868866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.869030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.869056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.869227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.869270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 
00:33:07.405 [2024-07-25 23:39:04.869400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.869461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.869620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.869665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.869768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.869793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.869903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.869929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.870089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.405 [2024-07-25 23:39:04.870116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.405 qpair failed and we were unable to recover it. 00:33:07.405 [2024-07-25 23:39:04.870248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.870273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.870406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.870431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.870571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.870598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.870702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.870728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.870855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.870881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 
00:33:07.406 [2024-07-25 23:39:04.871013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.871043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.871200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.871246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.871427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.871456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.871617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.871643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.871777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.871803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.871939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.871965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.872071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.872098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.872232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.872258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.872381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.872427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.872556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.872600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 
00:33:07.406 [2024-07-25 23:39:04.872723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.872766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.872907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.872933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.873082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.873110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.873260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.873305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.873502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.873545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.873700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.873743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.873877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.873904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.874067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.874094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.874247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.874290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.874420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.874449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 
00:33:07.406 [2024-07-25 23:39:04.874622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.874667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.874772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.874799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.874914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.874939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.875078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.875105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.875261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.875286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.875420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.875445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.875576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.875604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.875749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.875782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.875899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.875927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 00:33:07.406 [2024-07-25 23:39:04.876084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.406 [2024-07-25 23:39:04.876110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.406 qpair failed and we were unable to recover it. 
00:33:07.406 [2024-07-25 23:39:04.876215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.407 [2024-07-25 23:39:04.876240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.407 qpair failed and we were unable to recover it.
00:33:07.407 [2024-07-25 23:39:04.877054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.407 [2024-07-25 23:39:04.877107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.407 qpair failed and we were unable to recover it.
[00:33:07.407 through 00:33:07.413: the same three-message failure sequence (connect() failed, errno = 111; sock connection error; qpair failed and we were unable to recover it) repeats roughly 200 more times across host timestamps 23:39:04.876 to 23:39:04.913, alternating between tqpair=0xfa44b0 and tqpair=0x7fdc14000b90, all against addr=10.0.0.2, port=4420.]
00:33:07.413 [2024-07-25 23:39:04.913080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.413 [2024-07-25 23:39:04.913123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.413 qpair failed and we were unable to recover it. 00:33:07.413 [2024-07-25 23:39:04.913273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.413 [2024-07-25 23:39:04.913302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.413 qpair failed and we were unable to recover it. 00:33:07.413 [2024-07-25 23:39:04.913455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.413 [2024-07-25 23:39:04.913480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.413 qpair failed and we were unable to recover it. 00:33:07.413 [2024-07-25 23:39:04.913577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.413 [2024-07-25 23:39:04.913602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.413 qpair failed and we were unable to recover it. 00:33:07.413 [2024-07-25 23:39:04.913777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.413 [2024-07-25 23:39:04.913802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.413 qpair failed and we were unable to recover it. 00:33:07.413 [2024-07-25 23:39:04.913907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.413 [2024-07-25 23:39:04.913932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.413 qpair failed and we were unable to recover it. 00:33:07.413 [2024-07-25 23:39:04.914034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.413 [2024-07-25 23:39:04.914068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.413 qpair failed and we were unable to recover it. 00:33:07.413 [2024-07-25 23:39:04.914169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.413 [2024-07-25 23:39:04.914195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.413 qpair failed and we were unable to recover it. 00:33:07.413 [2024-07-25 23:39:04.914295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.413 [2024-07-25 23:39:04.914320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.413 qpair failed and we were unable to recover it. 00:33:07.413 [2024-07-25 23:39:04.914478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.413 [2024-07-25 23:39:04.914504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.413 qpair failed and we were unable to recover it. 
00:33:07.413 [2024-07-25 23:39:04.914684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.413 [2024-07-25 23:39:04.914709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.413 qpair failed and we were unable to recover it. 00:33:07.413 [2024-07-25 23:39:04.914842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.413 [2024-07-25 23:39:04.914867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.413 qpair failed and we were unable to recover it. 00:33:07.413 [2024-07-25 23:39:04.914974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.413 [2024-07-25 23:39:04.915003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.413 qpair failed and we were unable to recover it. 00:33:07.413 [2024-07-25 23:39:04.915178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.413 [2024-07-25 23:39:04.915207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.413 qpair failed and we were unable to recover it. 00:33:07.413 [2024-07-25 23:39:04.915334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.413 [2024-07-25 23:39:04.915359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.413 qpair failed and we were unable to recover it. 00:33:07.413 [2024-07-25 23:39:04.915495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.413 [2024-07-25 23:39:04.915520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.413 qpair failed and we were unable to recover it. 00:33:07.413 [2024-07-25 23:39:04.915698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.413 [2024-07-25 23:39:04.915726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.413 qpair failed and we were unable to recover it. 00:33:07.413 [2024-07-25 23:39:04.915877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.413 [2024-07-25 23:39:04.915909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.413 qpair failed and we were unable to recover it. 00:33:07.413 [2024-07-25 23:39:04.916067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.413 [2024-07-25 23:39:04.916096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.413 qpair failed and we were unable to recover it. 00:33:07.413 [2024-07-25 23:39:04.916249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.413 [2024-07-25 23:39:04.916276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.413 qpair failed and we were unable to recover it. 
00:33:07.414 [2024-07-25 23:39:04.916397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.916422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.916523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.916548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.916731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.916758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.916907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.916932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.917106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.917135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.917319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.917344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.917442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.917467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.917596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.917621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.917749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.917790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.917958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.917983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 
00:33:07.414 [2024-07-25 23:39:04.918129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.918158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.918314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.918339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.918463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.918489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.918596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.918621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.918721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.918746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.918877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.918902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.919038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.919089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.919201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.919230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.919366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.919391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.919488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.919513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 
00:33:07.414 [2024-07-25 23:39:04.919656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.919685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.919843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.919868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.920001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.920026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.920163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.920189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.920295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.920324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.920424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.920449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.920577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.920602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.920702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.920727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.920830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.920855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.921047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.921091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 
00:33:07.414 [2024-07-25 23:39:04.921194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.921219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.921351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.921376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.921527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.921555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.921704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.921729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.921858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.414 [2024-07-25 23:39:04.921882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.414 qpair failed and we were unable to recover it. 00:33:07.414 [2024-07-25 23:39:04.922004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.922032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.922197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.922222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.922330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.922354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.922462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.922487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.922649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.922673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 
00:33:07.415 [2024-07-25 23:39:04.922789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.922818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.922973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.922999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.923136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.923161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.923325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.923351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.923543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.923568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.923701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.923726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.923858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.923883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.923994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.924018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.924159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.924185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.924363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.924390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 
00:33:07.415 [2024-07-25 23:39:04.924566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.924594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.924728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.924753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.924890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.924915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.925075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.925100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.925237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.925263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.925388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.925428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.925605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.925633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.925782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.925807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.925980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.926008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.926119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.926146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 
00:33:07.415 [2024-07-25 23:39:04.926330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.926355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.926491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.926516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.926647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.926671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.926779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.926805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.926911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.926936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.927070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.927103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.927237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.927262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.927428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.927453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.927614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.927642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.927818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.927843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 
00:33:07.415 [2024-07-25 23:39:04.927947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.927973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.928129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.415 [2024-07-25 23:39:04.928154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.415 qpair failed and we were unable to recover it. 00:33:07.415 [2024-07-25 23:39:04.928261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.928285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.928419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.928444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.928599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.928626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.928804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.928829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.928938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.928962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.929082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.929107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.929223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.929248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.929388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.929413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 
00:33:07.416 [2024-07-25 23:39:04.929593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.929621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.929776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.929801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.929934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.929974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.930089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.930117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.930297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.930322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.930434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.930459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.930595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.930619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.930779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.930804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.930907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.930951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.931129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.931155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 
00:33:07.416 [2024-07-25 23:39:04.931282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.931306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.931414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.931439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.931576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.931604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.931772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.931797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.931981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.932009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.932140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.932168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.932299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.932324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.932433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.932458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.932558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.932599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.932749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.932774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 
00:33:07.416 [2024-07-25 23:39:04.932901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.932942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.933095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.933122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.933252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.933278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.933393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.933417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.933527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.933552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.933708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.933732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.933845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.933887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.934030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.934057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.416 qpair failed and we were unable to recover it. 00:33:07.416 [2024-07-25 23:39:04.934219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.416 [2024-07-25 23:39:04.934244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.417 qpair failed and we were unable to recover it. 00:33:07.417 [2024-07-25 23:39:04.934355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.417 [2024-07-25 23:39:04.934381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.417 qpair failed and we were unable to recover it. 
00:33:07.417 [2024-07-25 23:39:04.934553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.417 [2024-07-25 23:39:04.934578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.417 qpair failed and we were unable to recover it.
00:33:07.417 [the identical connect() failed / sock connection error / qpair failed sequence repeats for tqpair=0xfa44b0 on every retry from 23:39:04.934721 through 23:39:04.940013]
00:33:07.417 [2024-07-25 23:39:04.940175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.417 [2024-07-25 23:39:04.940214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.417 qpair failed and we were unable to recover it.
00:33:07.418 [the same sequence repeats for tqpair=0x7fdc14000b90 through 23:39:04.941539, then again for tqpair=0xfa44b0 on every retry from 23:39:04.941676 through 23:39:04.969245]
00:33:07.423 [2024-07-25 23:39:04.969377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.423 [2024-07-25 23:39:04.969418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.423 qpair failed and we were unable to recover it.
00:33:07.423 [2024-07-25 23:39:04.969583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.969611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 00:33:07.423 [2024-07-25 23:39:04.969738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.969763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 00:33:07.423 [2024-07-25 23:39:04.969920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.969944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 00:33:07.423 [2024-07-25 23:39:04.970090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.970116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 00:33:07.423 [2024-07-25 23:39:04.970227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.970253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 00:33:07.423 [2024-07-25 23:39:04.970383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.970408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 00:33:07.423 [2024-07-25 23:39:04.970562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.970588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 00:33:07.423 [2024-07-25 23:39:04.970692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.970717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 00:33:07.423 [2024-07-25 23:39:04.970816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.970841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 00:33:07.423 [2024-07-25 23:39:04.970994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.971021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 
00:33:07.423 [2024-07-25 23:39:04.971149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.971184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 00:33:07.423 [2024-07-25 23:39:04.971320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.971346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 00:33:07.423 [2024-07-25 23:39:04.971531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.971558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 00:33:07.423 [2024-07-25 23:39:04.971706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.971731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 00:33:07.423 [2024-07-25 23:39:04.971843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.971869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 00:33:07.423 [2024-07-25 23:39:04.972004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.972029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 00:33:07.423 [2024-07-25 23:39:04.972175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.972200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 00:33:07.423 [2024-07-25 23:39:04.972359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.972406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 00:33:07.423 [2024-07-25 23:39:04.972547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.972576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 00:33:07.423 [2024-07-25 23:39:04.972725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.972751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 
00:33:07.423 [2024-07-25 23:39:04.972881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.972905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 00:33:07.423 [2024-07-25 23:39:04.973089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.973118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 00:33:07.423 [2024-07-25 23:39:04.973268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.973293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 00:33:07.423 [2024-07-25 23:39:04.973398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.973423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 00:33:07.423 [2024-07-25 23:39:04.973536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.973564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 00:33:07.423 [2024-07-25 23:39:04.973692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.973717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 00:33:07.423 [2024-07-25 23:39:04.973829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.973856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.423 qpair failed and we were unable to recover it. 00:33:07.423 [2024-07-25 23:39:04.974001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.423 [2024-07-25 23:39:04.974025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.974164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.974189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.974300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.974326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 
00:33:07.424 [2024-07-25 23:39:04.974439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.974464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.974569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.974595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.974731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.974756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.974910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.974938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.975071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.975097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.975253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.975294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.975467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.975492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.975652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.975677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.975834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.975862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.975985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.976012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 
00:33:07.424 [2024-07-25 23:39:04.976180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.976206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.976342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.976366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.976496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.976520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.976652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.976677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.976779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.976807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.976946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.976974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.977109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.977134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.977265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.977290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.977444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.977471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.977660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.977685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 
00:33:07.424 [2024-07-25 23:39:04.977800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.977824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.977931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.977956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.978068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.978094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.978265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.978293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.978465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.978492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.978627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.978653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.978795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.978820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.978969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.978993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.979104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.979131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.979269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.979294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 
00:33:07.424 [2024-07-25 23:39:04.979433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.979474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.979622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.979647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.979782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.424 [2024-07-25 23:39:04.979807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.424 qpair failed and we were unable to recover it. 00:33:07.424 [2024-07-25 23:39:04.979918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.979943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.980047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.980079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.980233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.980258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.980385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.980412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.980542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.980567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.980701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.980727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.980861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.980886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 
00:33:07.425 [2024-07-25 23:39:04.981020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.981045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.981218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.981244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.981402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.981430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.981609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.981634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.981741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.981767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.981901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.981926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.982036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.982071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.982207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.982231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.982357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.982385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.982539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.982564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 
00:33:07.425 [2024-07-25 23:39:04.982695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.982720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.982854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.982879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.982987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.983011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.983142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.983168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.983276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.983301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.983405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.983435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.983583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.983626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.983743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.983771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.983916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.983941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.984077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.984118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 
00:33:07.425 [2024-07-25 23:39:04.984278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.984302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.984435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.984461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.984594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.984619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.984720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.984744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.984870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.984895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.985027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.985051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.985206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.985232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.985408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.985432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.985571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.985595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 00:33:07.425 [2024-07-25 23:39:04.985710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.425 [2024-07-25 23:39:04.985735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.425 qpair failed and we were unable to recover it. 
00:33:07.426 [2024-07-25 23:39:04.985923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.985949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.986088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.986113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.986227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.986252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.986409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.986434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.986577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.986604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.986743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.986771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.986926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.986950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.987081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.987123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.987239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.987267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.987401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.987426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 
00:33:07.426 [2024-07-25 23:39:04.987555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.987579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.987737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.987766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.987904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.987932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.988041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.988072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.988246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.988274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.988430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.988454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.988592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.988617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.988789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.988814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.988971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.988995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.989177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.989206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 
00:33:07.426 [2024-07-25 23:39:04.989325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.989353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.989535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.989561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.989710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.989738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.989867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.989894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.990050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.990084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.990182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.990207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.990406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.990432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.990556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.990581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.990709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.990734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 00:33:07.426 [2024-07-25 23:39:04.990841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.426 [2024-07-25 23:39:04.990866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.426 qpair failed and we were unable to recover it. 
00:33:07.426 [2024-07-25 23:39:04.990966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.426 [2024-07-25 23:39:04.990991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.426 qpair failed and we were unable to recover it.
00:33:07.426–00:33:07.432 [2024-07-25 23:39:04.991125 – 23:39:05.025708] (the same three-message group repeats for every remaining reconnect attempt in this window: posix_sock_create reports connect() failed, errno = 111; nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420; and each qpair fails and cannot be recovered)
00:33:07.432 [2024-07-25 23:39:05.025836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.432 [2024-07-25 23:39:05.025861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.432 qpair failed and we were unable to recover it. 00:33:07.432 [2024-07-25 23:39:05.025966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.432 [2024-07-25 23:39:05.025991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.432 qpair failed and we were unable to recover it. 00:33:07.432 [2024-07-25 23:39:05.026133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.432 [2024-07-25 23:39:05.026162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.432 qpair failed and we were unable to recover it. 00:33:07.432 [2024-07-25 23:39:05.026277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.432 [2024-07-25 23:39:05.026303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.432 qpair failed and we were unable to recover it. 00:33:07.432 [2024-07-25 23:39:05.026438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.432 [2024-07-25 23:39:05.026463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.432 qpair failed and we were unable to recover it. 00:33:07.432 [2024-07-25 23:39:05.026623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.432 [2024-07-25 23:39:05.026651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.432 qpair failed and we were unable to recover it. 00:33:07.432 [2024-07-25 23:39:05.026774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.432 [2024-07-25 23:39:05.026799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.432 qpair failed and we were unable to recover it. 00:33:07.432 [2024-07-25 23:39:05.026896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.432 [2024-07-25 23:39:05.026922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.432 qpair failed and we were unable to recover it. 00:33:07.432 [2024-07-25 23:39:05.027102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.432 [2024-07-25 23:39:05.027128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.432 qpair failed and we were unable to recover it. 00:33:07.432 [2024-07-25 23:39:05.027283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.432 [2024-07-25 23:39:05.027308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.432 qpair failed and we were unable to recover it. 
00:33:07.432 [2024-07-25 23:39:05.027456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.432 [2024-07-25 23:39:05.027484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.432 qpair failed and we were unable to recover it. 00:33:07.432 [2024-07-25 23:39:05.027595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.432 [2024-07-25 23:39:05.027623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.432 qpair failed and we were unable to recover it. 00:33:07.432 [2024-07-25 23:39:05.027741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.432 [2024-07-25 23:39:05.027766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.432 qpair failed and we were unable to recover it. 00:33:07.432 [2024-07-25 23:39:05.027896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.432 [2024-07-25 23:39:05.027922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.432 qpair failed and we were unable to recover it. 00:33:07.432 [2024-07-25 23:39:05.028051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.432 [2024-07-25 23:39:05.028086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.432 qpair failed and we were unable to recover it. 00:33:07.432 [2024-07-25 23:39:05.028263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.432 [2024-07-25 23:39:05.028289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.432 qpair failed and we were unable to recover it. 00:33:07.432 [2024-07-25 23:39:05.028413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.432 [2024-07-25 23:39:05.028456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.432 qpair failed and we were unable to recover it. 00:33:07.432 [2024-07-25 23:39:05.028603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.432 [2024-07-25 23:39:05.028637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.432 qpair failed and we were unable to recover it. 00:33:07.432 [2024-07-25 23:39:05.028765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.432 [2024-07-25 23:39:05.028790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.432 qpair failed and we were unable to recover it. 00:33:07.432 [2024-07-25 23:39:05.028924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.432 [2024-07-25 23:39:05.028950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.432 qpair failed and we were unable to recover it. 
00:33:07.433 [2024-07-25 23:39:05.029052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.029099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.029239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.029265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.029409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.029438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.029602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.029630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.029783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.029809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.029917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.029943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.030076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.030105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.030285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.030311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.030448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.030474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.030609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.030634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 
00:33:07.433 [2024-07-25 23:39:05.030767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.030792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.030895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.030921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.031072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.031100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.031258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.031283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.031417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.031459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.031573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.031601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.031738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.031764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.031868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.031893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.032018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.032043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.032180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.032206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 
00:33:07.433 [2024-07-25 23:39:05.032388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.032415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.032562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.032590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.032746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.032771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.032902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.032944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.033108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.033134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.033249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.033274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.033410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.033435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.033614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.033643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.033772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.033797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.033922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.033947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 
00:33:07.433 [2024-07-25 23:39:05.034110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.433 [2024-07-25 23:39:05.034138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.433 qpair failed and we were unable to recover it. 00:33:07.433 [2024-07-25 23:39:05.034299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.434 [2024-07-25 23:39:05.034324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.434 qpair failed and we were unable to recover it. 00:33:07.434 [2024-07-25 23:39:05.034497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.434 [2024-07-25 23:39:05.034525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.434 qpair failed and we were unable to recover it. 00:33:07.434 [2024-07-25 23:39:05.034709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.434 [2024-07-25 23:39:05.034734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.434 qpair failed and we were unable to recover it. 00:33:07.434 [2024-07-25 23:39:05.034891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.434 [2024-07-25 23:39:05.034917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.434 qpair failed and we were unable to recover it. 00:33:07.434 [2024-07-25 23:39:05.035022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.434 [2024-07-25 23:39:05.035076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.434 qpair failed and we were unable to recover it. 00:33:07.434 [2024-07-25 23:39:05.035219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.434 [2024-07-25 23:39:05.035247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.434 qpair failed and we were unable to recover it. 00:33:07.434 [2024-07-25 23:39:05.035385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.434 [2024-07-25 23:39:05.035410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.434 qpair failed and we were unable to recover it. 00:33:07.434 [2024-07-25 23:39:05.035543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.434 [2024-07-25 23:39:05.035572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.434 qpair failed and we were unable to recover it. 00:33:07.434 [2024-07-25 23:39:05.035675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.434 [2024-07-25 23:39:05.035700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.434 qpair failed and we were unable to recover it. 
00:33:07.434 [2024-07-25 23:39:05.035803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.434 [2024-07-25 23:39:05.035827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.434 qpair failed and we were unable to recover it. 00:33:07.434 [2024-07-25 23:39:05.035936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.434 [2024-07-25 23:39:05.035961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.434 qpair failed and we were unable to recover it. 00:33:07.434 [2024-07-25 23:39:05.036116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.434 [2024-07-25 23:39:05.036142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.434 qpair failed and we were unable to recover it. 00:33:07.434 [2024-07-25 23:39:05.036280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.434 [2024-07-25 23:39:05.036304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.434 qpair failed and we were unable to recover it. 00:33:07.434 [2024-07-25 23:39:05.036440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.434 [2024-07-25 23:39:05.036465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.434 qpair failed and we were unable to recover it. 00:33:07.434 [2024-07-25 23:39:05.036646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.434 [2024-07-25 23:39:05.036674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.434 qpair failed and we were unable to recover it. 00:33:07.434 [2024-07-25 23:39:05.036839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.434 [2024-07-25 23:39:05.036863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.434 qpair failed and we were unable to recover it. 00:33:07.434 [2024-07-25 23:39:05.036993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.434 [2024-07-25 23:39:05.037036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.434 qpair failed and we were unable to recover it. 00:33:07.434 [2024-07-25 23:39:05.037210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.434 [2024-07-25 23:39:05.037236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.434 qpair failed and we were unable to recover it. 00:33:07.434 [2024-07-25 23:39:05.037334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.434 [2024-07-25 23:39:05.037358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.434 qpair failed and we were unable to recover it. 
00:33:07.434 [2024-07-25 23:39:05.037466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.434 [2024-07-25 23:39:05.037491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.434 qpair failed and we were unable to recover it. 00:33:07.434 [2024-07-25 23:39:05.037669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.434 [2024-07-25 23:39:05.037697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.434 qpair failed and we were unable to recover it. 00:33:07.434 [2024-07-25 23:39:05.037875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.434 [2024-07-25 23:39:05.037899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.434 qpair failed and we were unable to recover it. 00:33:07.434 [2024-07-25 23:39:05.038004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.434 [2024-07-25 23:39:05.038047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.434 qpair failed and we were unable to recover it. 00:33:07.434 [2024-07-25 23:39:05.038210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.434 [2024-07-25 23:39:05.038235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.434 qpair failed and we were unable to recover it. 00:33:07.434 [2024-07-25 23:39:05.038346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.434 [2024-07-25 23:39:05.038372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.434 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.038472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.038496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.038646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.038674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.038850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.038876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.039007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.039048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 
00:33:07.435 [2024-07-25 23:39:05.039205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.039233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.039368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.039393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.039499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.039524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.039686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.039711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.039871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.039897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.040076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.040108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.040236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.040263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.040399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.040425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.040583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.040608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.040781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.040806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 
00:33:07.435 [2024-07-25 23:39:05.040943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.040968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.041107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.041147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.041305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.041330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.041438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.041463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.041595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.041620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.041781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.041808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.041982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.042008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.042142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.042167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.042328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.042355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.042504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.042530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 
00:33:07.435 [2024-07-25 23:39:05.042639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.042664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.042813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.042841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.042962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.042987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.043122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.043147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.043312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.043339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.043465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.043490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.043628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.043653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.435 qpair failed and we were unable to recover it. 00:33:07.435 [2024-07-25 23:39:05.043832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.435 [2024-07-25 23:39:05.043860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.436 qpair failed and we were unable to recover it. 00:33:07.436 [2024-07-25 23:39:05.043974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.436 [2024-07-25 23:39:05.044016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.436 qpair failed and we were unable to recover it. 00:33:07.436 [2024-07-25 23:39:05.044134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.436 [2024-07-25 23:39:05.044160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.436 qpair failed and we were unable to recover it. 
00:33:07.436 [2024-07-25 23:39:05.044270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.436 [2024-07-25 23:39:05.044295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.436 qpair failed and we were unable to recover it. 00:33:07.436 [2024-07-25 23:39:05.044400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.436 [2024-07-25 23:39:05.044425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.436 qpair failed and we were unable to recover it. 00:33:07.436 [2024-07-25 23:39:05.044529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.436 [2024-07-25 23:39:05.044553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.436 qpair failed and we were unable to recover it. 00:33:07.436 [2024-07-25 23:39:05.044719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.436 [2024-07-25 23:39:05.044747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.436 qpair failed and we were unable to recover it. 00:33:07.436 [2024-07-25 23:39:05.044896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.436 [2024-07-25 23:39:05.044920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.436 qpair failed and we were unable to recover it. 00:33:07.436 [2024-07-25 23:39:05.045051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.436 [2024-07-25 23:39:05.045084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.436 qpair failed and we were unable to recover it. 00:33:07.436 [2024-07-25 23:39:05.045220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.436 [2024-07-25 23:39:05.045245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.436 qpair failed and we were unable to recover it. 00:33:07.436 [2024-07-25 23:39:05.045381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.436 [2024-07-25 23:39:05.045406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.436 qpair failed and we were unable to recover it. 00:33:07.436 [2024-07-25 23:39:05.045523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.436 [2024-07-25 23:39:05.045549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.436 qpair failed and we were unable to recover it. 00:33:07.436 [2024-07-25 23:39:05.045659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.436 [2024-07-25 23:39:05.045684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.436 qpair failed and we were unable to recover it. 
00:33:07.436 [2024-07-25 23:39:05.045841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.436 [2024-07-25 23:39:05.045865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.436 qpair failed and we were unable to recover it. 00:33:07.436 [2024-07-25 23:39:05.045973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.436 [2024-07-25 23:39:05.045998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.436 qpair failed and we were unable to recover it. 00:33:07.436 [2024-07-25 23:39:05.046127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.436 [2024-07-25 23:39:05.046153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.436 qpair failed and we were unable to recover it. 00:33:07.436 [2024-07-25 23:39:05.046313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.436 [2024-07-25 23:39:05.046338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.436 qpair failed and we were unable to recover it. 00:33:07.436 [2024-07-25 23:39:05.046436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.436 [2024-07-25 23:39:05.046462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.436 qpair failed and we were unable to recover it. 00:33:07.436 [2024-07-25 23:39:05.046594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.436 [2024-07-25 23:39:05.046619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.436 qpair failed and we were unable to recover it. 00:33:07.436 [2024-07-25 23:39:05.046750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.436 [2024-07-25 23:39:05.046778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.436 qpair failed and we were unable to recover it. 00:33:07.436 [2024-07-25 23:39:05.046904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.436 [2024-07-25 23:39:05.046928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.436 qpair failed and we were unable to recover it. 00:33:07.436 [2024-07-25 23:39:05.047055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.436 [2024-07-25 23:39:05.047087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.436 qpair failed and we were unable to recover it. 00:33:07.436 [2024-07-25 23:39:05.047220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.436 [2024-07-25 23:39:05.047246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.436 qpair failed and we were unable to recover it. 
00:33:07.436 [2024-07-25 23:39:05.047406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.436 [2024-07-25 23:39:05.047430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.436 qpair failed and we were unable to recover it.
00:33:07.443 [... the same pair of errors — posix.c:1023:posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats continuously with only the timestamp changing, from 23:39:05.047560 through 23:39:05.082167 ...]
00:33:07.443 [2024-07-25 23:39:05.082272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.443 [2024-07-25 23:39:05.082298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.443 qpair failed and we were unable to recover it. 00:33:07.443 [2024-07-25 23:39:05.082449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.443 [2024-07-25 23:39:05.082477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.443 qpair failed and we were unable to recover it. 00:33:07.443 [2024-07-25 23:39:05.082632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.443 [2024-07-25 23:39:05.082657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.443 qpair failed and we were unable to recover it. 00:33:07.443 [2024-07-25 23:39:05.082785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.443 [2024-07-25 23:39:05.082826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.443 qpair failed and we were unable to recover it. 00:33:07.443 [2024-07-25 23:39:05.082946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.443 [2024-07-25 23:39:05.082974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.443 qpair failed and we were unable to recover it. 00:33:07.443 [2024-07-25 23:39:05.083152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.443 [2024-07-25 23:39:05.083177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.443 qpair failed and we were unable to recover it. 00:33:07.443 [2024-07-25 23:39:05.083292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.443 [2024-07-25 23:39:05.083334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.443 qpair failed and we were unable to recover it. 00:33:07.443 [2024-07-25 23:39:05.083445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.443 [2024-07-25 23:39:05.083472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.443 qpair failed and we were unable to recover it. 00:33:07.443 [2024-07-25 23:39:05.083600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.443 [2024-07-25 23:39:05.083625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.443 qpair failed and we were unable to recover it. 00:33:07.443 [2024-07-25 23:39:05.083752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.443 [2024-07-25 23:39:05.083776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.443 qpair failed and we were unable to recover it. 
00:33:07.443 [2024-07-25 23:39:05.083916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.443 [2024-07-25 23:39:05.083943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.443 qpair failed and we were unable to recover it. 00:33:07.443 [2024-07-25 23:39:05.084091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.443 [2024-07-25 23:39:05.084133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.443 qpair failed and we were unable to recover it. 00:33:07.726 [2024-07-25 23:39:05.084248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.726 [2024-07-25 23:39:05.084273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.726 qpair failed and we were unable to recover it. 00:33:07.726 [2024-07-25 23:39:05.084386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.726 [2024-07-25 23:39:05.084412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.726 qpair failed and we were unable to recover it. 00:33:07.726 [2024-07-25 23:39:05.084526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.726 [2024-07-25 23:39:05.084551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.726 qpair failed and we were unable to recover it. 00:33:07.726 [2024-07-25 23:39:05.084702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.726 [2024-07-25 23:39:05.084727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.726 qpair failed and we were unable to recover it. 00:33:07.726 [2024-07-25 23:39:05.084837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.726 [2024-07-25 23:39:05.084863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.726 qpair failed and we were unable to recover it. 00:33:07.726 [2024-07-25 23:39:05.084963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.726 [2024-07-25 23:39:05.084987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.726 qpair failed and we were unable to recover it. 00:33:07.726 [2024-07-25 23:39:05.085096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.726 [2024-07-25 23:39:05.085122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.726 qpair failed and we were unable to recover it. 00:33:07.726 [2024-07-25 23:39:05.085227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.726 [2024-07-25 23:39:05.085252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.726 qpair failed and we were unable to recover it. 
00:33:07.726 [2024-07-25 23:39:05.085396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.726 [2024-07-25 23:39:05.085421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.726 qpair failed and we were unable to recover it. 00:33:07.726 [2024-07-25 23:39:05.085527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.726 [2024-07-25 23:39:05.085551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.726 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.085674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.085714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.085868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.085908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.086033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.086088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.086206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.086233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.086371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.086416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.086596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.086625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.086761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.086790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.086949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.086976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 
00:33:07.727 [2024-07-25 23:39:05.087125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.087155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.087301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.087330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.087457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.087484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.087593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.087620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.087755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.087782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.087946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.087972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.088081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.088114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.088215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.088241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.088354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.088379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.088536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.088565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 
00:33:07.727 [2024-07-25 23:39:05.088710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.088741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.088863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.088890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.089021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.089047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.089223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.089250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.089373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.089417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.089551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.089595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.089762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.089788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.089897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.089924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.090065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.090091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.090224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.090250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 
00:33:07.727 [2024-07-25 23:39:05.090356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.090382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.090485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.090511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.090641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.727 [2024-07-25 23:39:05.090667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.727 qpair failed and we were unable to recover it. 00:33:07.727 [2024-07-25 23:39:05.090773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.090800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.090939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.090966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.091073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.091101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.091214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.091239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.091367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.091391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.091525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.091549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.091658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.091682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 
00:33:07.728 [2024-07-25 23:39:05.091791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.091817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.091942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.091966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.092075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.092100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.092255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.092282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.092449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.092482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.092688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.092733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.092890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.092919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.093050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.093083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.093199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.093225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.093348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.093391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 
00:33:07.728 [2024-07-25 23:39:05.093596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.093683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.093917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.093944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.094063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.094090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.094268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.094312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.094467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.094510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.094661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.094704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.094842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.094867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.094978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.095004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.095164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.095208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.095369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.095399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 
00:33:07.728 [2024-07-25 23:39:05.095527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.095576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.095685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.095712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.095841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.095882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.096039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.096072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.096199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.728 [2024-07-25 23:39:05.096226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.728 qpair failed and we were unable to recover it. 00:33:07.728 [2024-07-25 23:39:05.096404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.096432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.096547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.096575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.096700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.096726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.096870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.096898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.097013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.097040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 
00:33:07.729 [2024-07-25 23:39:05.097191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.097216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.097337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.097365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.097508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.097538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.097682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.097711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.097886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.097931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.098076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.098103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.098233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.098260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.098387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.098434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.098613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.098663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.098795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.098839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 
00:33:07.729 [2024-07-25 23:39:05.098971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.098996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.099169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.099195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.099318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.099362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.099572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.099620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.099769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.099796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.099931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.099958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.100099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.100126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.100232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.100257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.100419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.100444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.100607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.100632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 
00:33:07.729 [2024-07-25 23:39:05.100763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.100789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.100922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.100948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.101064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.101092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.101218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.101246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.101385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.101412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.101532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.101560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.729 qpair failed and we were unable to recover it. 00:33:07.729 [2024-07-25 23:39:05.101680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.729 [2024-07-25 23:39:05.101705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.730 qpair failed and we were unable to recover it. 00:33:07.730 [2024-07-25 23:39:05.101868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.730 [2024-07-25 23:39:05.101893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.730 qpair failed and we were unable to recover it. 00:33:07.730 [2024-07-25 23:39:05.101999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.730 [2024-07-25 23:39:05.102026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.730 qpair failed and we were unable to recover it. 00:33:07.730 [2024-07-25 23:39:05.102141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.730 [2024-07-25 23:39:05.102170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.730 qpair failed and we were unable to recover it. 
00:33:07.730 [2024-07-25 23:39:05.102301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.730 [2024-07-25 23:39:05.102346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.730 qpair failed and we were unable to recover it. 00:33:07.730 [2024-07-25 23:39:05.102463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.730 [2024-07-25 23:39:05.102492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.730 qpair failed and we were unable to recover it. 00:33:07.730 [2024-07-25 23:39:05.102665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.730 [2024-07-25 23:39:05.102708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.730 qpair failed and we were unable to recover it. 00:33:07.730 [2024-07-25 23:39:05.102856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.730 [2024-07-25 23:39:05.102882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.730 qpair failed and we were unable to recover it. 00:33:07.730 [2024-07-25 23:39:05.103018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.730 [2024-07-25 23:39:05.103044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.730 qpair failed and we were unable to recover it. 00:33:07.730 [2024-07-25 23:39:05.103211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.730 [2024-07-25 23:39:05.103258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.730 qpair failed and we were unable to recover it. 00:33:07.730 [2024-07-25 23:39:05.103408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.730 [2024-07-25 23:39:05.103437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.730 qpair failed and we were unable to recover it. 00:33:07.730 [2024-07-25 23:39:05.103585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.730 [2024-07-25 23:39:05.103628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.730 qpair failed and we were unable to recover it. 00:33:07.730 [2024-07-25 23:39:05.103771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.730 [2024-07-25 23:39:05.103797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.730 qpair failed and we were unable to recover it. 00:33:07.730 [2024-07-25 23:39:05.103935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.730 [2024-07-25 23:39:05.103961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.730 qpair failed and we were unable to recover it. 
00:33:07.730 [2024-07-25 23:39:05.104120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.730 [2024-07-25 23:39:05.104147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.730 qpair failed and we were unable to recover it. 00:33:07.730 [2024-07-25 23:39:05.104279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.730 [2024-07-25 23:39:05.104304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.730 qpair failed and we were unable to recover it. 00:33:07.730 [2024-07-25 23:39:05.104456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.730 [2024-07-25 23:39:05.104484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.730 qpair failed and we were unable to recover it. 00:33:07.730 [2024-07-25 23:39:05.104627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.730 [2024-07-25 23:39:05.104655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.730 qpair failed and we were unable to recover it. 00:33:07.730 [2024-07-25 23:39:05.104809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.730 [2024-07-25 23:39:05.104834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.730 qpair failed and we were unable to recover it. 00:33:07.730 [2024-07-25 23:39:05.104938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.730 [2024-07-25 23:39:05.104963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.730 qpair failed and we were unable to recover it. 00:33:07.730 [2024-07-25 23:39:05.105118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.730 [2024-07-25 23:39:05.105149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.730 qpair failed and we were unable to recover it. 00:33:07.730 [2024-07-25 23:39:05.105288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.730 [2024-07-25 23:39:05.105333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.730 qpair failed and we were unable to recover it. 00:33:07.730 [2024-07-25 23:39:05.105506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.730 [2024-07-25 23:39:05.105533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.730 qpair failed and we were unable to recover it. 00:33:07.730 [2024-07-25 23:39:05.105695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.730 [2024-07-25 23:39:05.105739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.730 qpair failed and we were unable to recover it. 
00:33:07.737 [2024-07-25 23:39:05.141462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.737 [2024-07-25 23:39:05.141501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.737 qpair failed and we were unable to recover it. 00:33:07.737 [2024-07-25 23:39:05.141711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.737 [2024-07-25 23:39:05.141737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.737 qpair failed and we were unable to recover it. 00:33:07.737 [2024-07-25 23:39:05.141873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.737 [2024-07-25 23:39:05.141898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.737 qpair failed and we were unable to recover it. 00:33:07.737 [2024-07-25 23:39:05.142040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.737 [2024-07-25 23:39:05.142108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.737 qpair failed and we were unable to recover it. 00:33:07.737 [2024-07-25 23:39:05.142290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.737 [2024-07-25 23:39:05.142326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.737 qpair failed and we were unable to recover it. 00:33:07.737 [2024-07-25 23:39:05.142507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.737 [2024-07-25 23:39:05.142537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.737 qpair failed and we were unable to recover it. 00:33:07.737 [2024-07-25 23:39:05.142648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.737 [2024-07-25 23:39:05.142675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.737 qpair failed and we were unable to recover it. 00:33:07.737 [2024-07-25 23:39:05.142831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.737 [2024-07-25 23:39:05.142856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.737 qpair failed and we were unable to recover it. 00:33:07.737 [2024-07-25 23:39:05.142989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.737 [2024-07-25 23:39:05.143034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.737 qpair failed and we were unable to recover it. 00:33:07.737 [2024-07-25 23:39:05.143193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.737 [2024-07-25 23:39:05.143232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.737 qpair failed and we were unable to recover it. 
[... the connect() failed (errno = 111) / sock connection error / qpair failed triplet for tqpair=0x7fdc1c000b90 repeats without variation through 2024-07-25 23:39:05.170, ~160 occurrences elided ...] 
00:33:07.742 [2024-07-25 23:39:05.171003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:33:07.742 [2024-07-25 23:39:05.171031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 
00:33:07.742 qpair failed and we were unable to recover it. 
00:33:07.742 [2024-07-25 23:39:05.171167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.742 [2024-07-25 23:39:05.171193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.742 qpair failed and we were unable to recover it. 00:33:07.742 [2024-07-25 23:39:05.171303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.742 [2024-07-25 23:39:05.171329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.742 qpair failed and we were unable to recover it. 00:33:07.742 [2024-07-25 23:39:05.171530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.742 [2024-07-25 23:39:05.171556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.742 qpair failed and we were unable to recover it. 00:33:07.742 [2024-07-25 23:39:05.171738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.742 [2024-07-25 23:39:05.171766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.742 qpair failed and we were unable to recover it. 00:33:07.742 [2024-07-25 23:39:05.171881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.742 [2024-07-25 23:39:05.171910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.742 qpair failed and we were unable to recover it. 00:33:07.742 [2024-07-25 23:39:05.172090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.742 [2024-07-25 23:39:05.172116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.742 qpair failed and we were unable to recover it. 00:33:07.742 [2024-07-25 23:39:05.172253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.742 [2024-07-25 23:39:05.172282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.742 qpair failed and we were unable to recover it. 00:33:07.742 [2024-07-25 23:39:05.172453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.742 [2024-07-25 23:39:05.172479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.742 qpair failed and we were unable to recover it. 00:33:07.742 [2024-07-25 23:39:05.172622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.172664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 00:33:07.743 [2024-07-25 23:39:05.172809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.172837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 
00:33:07.743 [2024-07-25 23:39:05.172988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.173016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 00:33:07.743 [2024-07-25 23:39:05.173195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.173221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 00:33:07.743 [2024-07-25 23:39:05.173350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.173376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 00:33:07.743 [2024-07-25 23:39:05.173522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.173550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 00:33:07.743 [2024-07-25 23:39:05.173716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.173744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 00:33:07.743 [2024-07-25 23:39:05.173921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.173949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 00:33:07.743 [2024-07-25 23:39:05.174075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.174121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 00:33:07.743 [2024-07-25 23:39:05.174226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.174253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 00:33:07.743 [2024-07-25 23:39:05.174387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.174414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 00:33:07.743 [2024-07-25 23:39:05.174593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.174621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 
00:33:07.743 [2024-07-25 23:39:05.174770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.174798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 00:33:07.743 [2024-07-25 23:39:05.174969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.174997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 00:33:07.743 [2024-07-25 23:39:05.175185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.175211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 00:33:07.743 [2024-07-25 23:39:05.175371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.175396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 00:33:07.743 [2024-07-25 23:39:05.175540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.175568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 00:33:07.743 [2024-07-25 23:39:05.175684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.175714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 00:33:07.743 [2024-07-25 23:39:05.175857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.175885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 00:33:07.743 [2024-07-25 23:39:05.176029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.176057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 00:33:07.743 [2024-07-25 23:39:05.176218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.176243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 00:33:07.743 [2024-07-25 23:39:05.176373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.176399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 
00:33:07.743 [2024-07-25 23:39:05.176526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.176555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 00:33:07.743 [2024-07-25 23:39:05.176694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.176722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 00:33:07.743 [2024-07-25 23:39:05.176859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.176888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 00:33:07.743 [2024-07-25 23:39:05.177041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.177084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 00:33:07.743 [2024-07-25 23:39:05.177217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.743 [2024-07-25 23:39:05.177243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.743 qpair failed and we were unable to recover it. 00:33:07.743 [2024-07-25 23:39:05.177400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.177428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.177543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.177571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.177689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.177717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.177919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.177974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.178119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.178147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 
00:33:07.744 [2024-07-25 23:39:05.178278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.178321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.178475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.178518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.178702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.178745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.178859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.178884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.179019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.179045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.179221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.179250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.179456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.179505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.179660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.179703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.179861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.179887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.180022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.180047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 
00:33:07.744 [2024-07-25 23:39:05.180204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.180252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.180438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.180467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.180608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.180638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.180768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.180794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.180907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.180933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.181074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.181101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.181226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.181269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.181395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.181421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.181557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.181584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.181722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.181748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 
00:33:07.744 [2024-07-25 23:39:05.181884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.181911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.182106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.182133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.182283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.182326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.182440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.182467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.182629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.182655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.182785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.182810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.744 [2024-07-25 23:39:05.182947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.744 [2024-07-25 23:39:05.182972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.744 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.183140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.183184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.183341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.183371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.183520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.183548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 
00:33:07.745 [2024-07-25 23:39:05.183669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.183696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.183867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.183913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.184037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.184067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.184221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.184253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.184385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.184411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.184574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.184601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.184748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.184775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.184917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.184946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.185073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.185101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.185312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.185341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 
00:33:07.745 [2024-07-25 23:39:05.185459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.185486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.185602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.185630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.185812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.185859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.185970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.185996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.186136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.186164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.186318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.186365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.186509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.186535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.186699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.186729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.186903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.186929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.187070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.187097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 
00:33:07.745 [2024-07-25 23:39:05.187260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.187288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.187427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.187473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.187682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.187724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.187834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.187860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.187992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.188018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.188181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.188225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.188389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.188415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.188558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.745 [2024-07-25 23:39:05.188584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.745 qpair failed and we were unable to recover it. 00:33:07.745 [2024-07-25 23:39:05.188743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.188769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.188910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.188936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 
00:33:07.746 [2024-07-25 23:39:05.189078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.189106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.189227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.189256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.189402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.189429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.189614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.189664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.189834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.189861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.190005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.190033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.190185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.190215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.190360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.190389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.190590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.190634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.190760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.190808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 
00:33:07.746 [2024-07-25 23:39:05.190967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.190993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.191147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.191192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.191306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.191333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.191497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.191523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.191656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.191685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.191844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.191869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.191981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.192008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.192129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.192155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.192258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.192284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.192415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.192441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 
00:33:07.746 [2024-07-25 23:39:05.192568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.192594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.192697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.192723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.192827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.192853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.192956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.192982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.193145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.193172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.193330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.193356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.193492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.193518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.193680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.193706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.193841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.193867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 00:33:07.746 [2024-07-25 23:39:05.193971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.746 [2024-07-25 23:39:05.193997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.746 qpair failed and we were unable to recover it. 
00:33:07.747 [2024-07-25 23:39:05.194168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.747 [2024-07-25 23:39:05.194216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.747 qpair failed and we were unable to recover it.
00:33:07.747 [2024-07-25 23:39:05.197849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.747 [2024-07-25 23:39:05.197877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.747 qpair failed and we were unable to recover it.
00:33:07.750 [2024-07-25 23:39:05.209474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.750 [2024-07-25 23:39:05.209535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.750 qpair failed and we were unable to recover it.
00:33:07.753 [2024-07-25 23:39:05.230560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.753 [2024-07-25 23:39:05.230588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.753 qpair failed and we were unable to recover it.
00:33:07.753 [2024-07-25 23:39:05.230712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.753 [2024-07-25 23:39:05.230737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.753 qpair failed and we were unable to recover it. 00:33:07.753 [2024-07-25 23:39:05.230872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.753 [2024-07-25 23:39:05.230897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.753 qpair failed and we were unable to recover it. 00:33:07.753 [2024-07-25 23:39:05.231066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.753 [2024-07-25 23:39:05.231092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.753 qpair failed and we were unable to recover it. 00:33:07.753 [2024-07-25 23:39:05.231190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.753 [2024-07-25 23:39:05.231214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.753 qpair failed and we were unable to recover it. 00:33:07.753 [2024-07-25 23:39:05.231342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.753 [2024-07-25 23:39:05.231371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.753 qpair failed and we were unable to recover it. 00:33:07.753 [2024-07-25 23:39:05.231540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.753 [2024-07-25 23:39:05.231569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.753 qpair failed and we were unable to recover it. 00:33:07.753 [2024-07-25 23:39:05.231713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.753 [2024-07-25 23:39:05.231741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.753 qpair failed and we were unable to recover it. 00:33:07.753 [2024-07-25 23:39:05.231851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.753 [2024-07-25 23:39:05.231879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.753 qpair failed and we were unable to recover it. 00:33:07.754 [2024-07-25 23:39:05.232070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.754 [2024-07-25 23:39:05.232095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.754 qpair failed and we were unable to recover it. 00:33:07.754 [2024-07-25 23:39:05.232226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.754 [2024-07-25 23:39:05.232251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.754 qpair failed and we were unable to recover it. 
00:33:07.754 [2024-07-25 23:39:05.232401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.754 [2024-07-25 23:39:05.232430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.754 qpair failed and we were unable to recover it. 00:33:07.754 [2024-07-25 23:39:05.232541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.754 [2024-07-25 23:39:05.232569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.754 qpair failed and we were unable to recover it. 00:33:07.754 [2024-07-25 23:39:05.232738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.754 [2024-07-25 23:39:05.232766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.754 qpair failed and we were unable to recover it. 00:33:07.754 [2024-07-25 23:39:05.232896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.754 [2024-07-25 23:39:05.232923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.754 qpair failed and we were unable to recover it. 00:33:07.754 [2024-07-25 23:39:05.233085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.754 [2024-07-25 23:39:05.233112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.754 qpair failed and we were unable to recover it. 00:33:07.754 [2024-07-25 23:39:05.233291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.754 [2024-07-25 23:39:05.233335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.754 qpair failed and we were unable to recover it. 00:33:07.754 [2024-07-25 23:39:05.233489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.754 [2024-07-25 23:39:05.233532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.754 qpair failed and we were unable to recover it. 00:33:07.754 [2024-07-25 23:39:05.233657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.754 [2024-07-25 23:39:05.233701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.754 qpair failed and we were unable to recover it. 00:33:07.754 [2024-07-25 23:39:05.233835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.754 [2024-07-25 23:39:05.233861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.754 qpair failed and we were unable to recover it. 00:33:07.754 [2024-07-25 23:39:05.234011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.754 [2024-07-25 23:39:05.234038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.754 qpair failed and we were unable to recover it. 
00:33:07.754 [2024-07-25 23:39:05.234170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.754 [2024-07-25 23:39:05.234198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.754 qpair failed and we were unable to recover it. 00:33:07.754 [2024-07-25 23:39:05.234316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.754 [2024-07-25 23:39:05.234345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.754 qpair failed and we were unable to recover it. 00:33:07.754 [2024-07-25 23:39:05.234488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.754 [2024-07-25 23:39:05.234517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.754 qpair failed and we were unable to recover it. 00:33:07.754 [2024-07-25 23:39:05.234656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.754 [2024-07-25 23:39:05.234684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.754 qpair failed and we were unable to recover it. 00:33:07.754 [2024-07-25 23:39:05.234837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.754 [2024-07-25 23:39:05.234865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.754 qpair failed and we were unable to recover it. 00:33:07.754 [2024-07-25 23:39:05.235017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.754 [2024-07-25 23:39:05.235044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.754 qpair failed and we were unable to recover it. 00:33:07.754 [2024-07-25 23:39:05.235183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.754 [2024-07-25 23:39:05.235209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.754 qpair failed and we were unable to recover it. 00:33:07.754 [2024-07-25 23:39:05.235393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.754 [2024-07-25 23:39:05.235436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.754 qpair failed and we were unable to recover it. 00:33:07.754 [2024-07-25 23:39:05.235605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.754 [2024-07-25 23:39:05.235632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.754 qpair failed and we were unable to recover it. 00:33:07.754 [2024-07-25 23:39:05.235791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.754 [2024-07-25 23:39:05.235835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.754 qpair failed and we were unable to recover it. 
00:33:07.754 [2024-07-25 23:39:05.235961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.754 [2024-07-25 23:39:05.235987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.754 qpair failed and we were unable to recover it. 00:33:07.754 [2024-07-25 23:39:05.236144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.754 [2024-07-25 23:39:05.236174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.754 qpair failed and we were unable to recover it. 00:33:07.754 [2024-07-25 23:39:05.236316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.754 [2024-07-25 23:39:05.236344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.754 qpair failed and we were unable to recover it. 00:33:07.754 [2024-07-25 23:39:05.236484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.754 [2024-07-25 23:39:05.236512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.754 qpair failed and we were unable to recover it. 00:33:07.754 [2024-07-25 23:39:05.236679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.236768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.236911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.236939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.237099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.237126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.237254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.237279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.237413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.237457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.237627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.237655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 
00:33:07.755 [2024-07-25 23:39:05.237906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.237934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.238097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.238123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.238254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.238279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.238490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.238540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.238710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.238738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.238912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.238940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.239054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.239085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.239218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.239244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.239380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.239408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.239569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.239602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 
00:33:07.755 [2024-07-25 23:39:05.239769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.239796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.239963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.239992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.240163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.240188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.240288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.240313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.240496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.240524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.240756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.240802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.240950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.240982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.241136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.241162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.241297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.241323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.241450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.241493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 
00:33:07.755 [2024-07-25 23:39:05.241611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.241639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.241819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.241846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.241987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.242016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.242156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.755 [2024-07-25 23:39:05.242182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.755 qpair failed and we were unable to recover it. 00:33:07.755 [2024-07-25 23:39:05.242317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.242342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.242448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.242473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.242594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.242622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.242783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.242809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.242997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.243024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.243155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.243180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 
00:33:07.756 [2024-07-25 23:39:05.243290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.243316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.243476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.243519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.243702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.243727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.243882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.243910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.244055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.244089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.244241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.244267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.244438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.244466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.244608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.244636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.244788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.244816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.244979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.245018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 
00:33:07.756 [2024-07-25 23:39:05.245175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.245204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.245334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.245363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.245527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.245571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.245753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.245802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.245933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.245958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.246115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.246146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.246321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.246364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.246518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.246547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.246826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.246879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.247017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.247042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 
00:33:07.756 [2024-07-25 23:39:05.247237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.247266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.247434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.247477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.247635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.247678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.247813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.247839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.247977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.248004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.756 qpair failed and we were unable to recover it. 00:33:07.756 [2024-07-25 23:39:05.248164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.756 [2024-07-25 23:39:05.248209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.248393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.248436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.248623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.248682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.248822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.248851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.248995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.249023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 
00:33:07.757 [2024-07-25 23:39:05.249182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.249208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.249341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.249365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.249513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.249541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.249661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.249702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.249838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.249867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.250013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.250041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.250270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.250326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.250497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.250540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.250678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.250724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.250869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.250895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 
00:33:07.757 [2024-07-25 23:39:05.251066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.251098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.251234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.251259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.251410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.251453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.251602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.251645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.251795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.251840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.251968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.251994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.252135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.252180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.252338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.252381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.252532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.252575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.252750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.252777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 
00:33:07.757 [2024-07-25 23:39:05.252884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.252910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.253072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.253103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.253229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.253274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.253405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.253430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.253571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.757 [2024-07-25 23:39:05.253597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.757 qpair failed and we were unable to recover it. 00:33:07.757 [2024-07-25 23:39:05.253731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.758 [2024-07-25 23:39:05.253757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.758 qpair failed and we were unable to recover it. 00:33:07.758 [2024-07-25 23:39:05.253895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.758 [2024-07-25 23:39:05.253920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.758 qpair failed and we were unable to recover it. 00:33:07.758 [2024-07-25 23:39:05.254084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.758 [2024-07-25 23:39:05.254111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.758 qpair failed and we were unable to recover it. 00:33:07.758 [2024-07-25 23:39:05.254249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.758 [2024-07-25 23:39:05.254276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.758 qpair failed and we were unable to recover it. 00:33:07.758 [2024-07-25 23:39:05.254414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.758 [2024-07-25 23:39:05.254439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.758 qpair failed and we were unable to recover it. 
00:33:07.758 [2024-07-25 23:39:05.254565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.758 [2024-07-25 23:39:05.254593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.758 qpair failed and we were unable to recover it.
00:33:07.758 [2024-07-25 23:39:05.255067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.758 [2024-07-25 23:39:05.255094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.758 qpair failed and we were unable to recover it.
00:33:07.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1536649 Killed "${NVMF_APP[@]}" "$@"
00:33:07.761 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:33:07.761 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:33:07.761 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:33:07.761 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:33:07.761 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:07.762 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1537258
00:33:07.762 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:33:07.762 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1537258
00:33:07.762 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1537258 ']'
00:33:07.762 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:07.762 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:33:07.762 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:07.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:07.762 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:33:07.762 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:07.764 [2024-07-25 23:39:05.288907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.764 [2024-07-25 23:39:05.288935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 00:33:07.765 [2024-07-25 23:39:05.289083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.765 [2024-07-25 23:39:05.289126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 00:33:07.765 [2024-07-25 23:39:05.289234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.765 [2024-07-25 23:39:05.289259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 00:33:07.765 [2024-07-25 23:39:05.289370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.765 [2024-07-25 23:39:05.289395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 00:33:07.765 [2024-07-25 23:39:05.289525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.765 [2024-07-25 23:39:05.289553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 00:33:07.765 [2024-07-25 23:39:05.289723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.765 [2024-07-25 23:39:05.289750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 00:33:07.765 [2024-07-25 23:39:05.289902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.765 [2024-07-25 23:39:05.289929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 00:33:07.765 [2024-07-25 23:39:05.290057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.765 [2024-07-25 23:39:05.290087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 00:33:07.765 [2024-07-25 23:39:05.290188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.765 [2024-07-25 23:39:05.290213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 00:33:07.765 [2024-07-25 23:39:05.290328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.765 [2024-07-25 23:39:05.290354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 
00:33:07.765 [2024-07-25 23:39:05.290506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.765 [2024-07-25 23:39:05.290533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 00:33:07.765 [2024-07-25 23:39:05.290649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.765 [2024-07-25 23:39:05.290690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 00:33:07.765 [2024-07-25 23:39:05.290827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.765 [2024-07-25 23:39:05.290853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 00:33:07.765 [2024-07-25 23:39:05.290971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.765 [2024-07-25 23:39:05.290998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 00:33:07.765 [2024-07-25 23:39:05.291124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.765 [2024-07-25 23:39:05.291150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 00:33:07.765 [2024-07-25 23:39:05.291274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.765 [2024-07-25 23:39:05.291300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 00:33:07.765 [2024-07-25 23:39:05.291443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.765 [2024-07-25 23:39:05.291470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 00:33:07.765 [2024-07-25 23:39:05.291625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.765 [2024-07-25 23:39:05.291652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 00:33:07.765 [2024-07-25 23:39:05.291819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.765 [2024-07-25 23:39:05.291845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 00:33:07.765 [2024-07-25 23:39:05.291968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.765 [2024-07-25 23:39:05.291993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 
00:33:07.765 [2024-07-25 23:39:05.292112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.765 [2024-07-25 23:39:05.292151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 00:33:07.765 [2024-07-25 23:39:05.292314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.765 [2024-07-25 23:39:05.292342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 00:33:07.765 [2024-07-25 23:39:05.292510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.765 [2024-07-25 23:39:05.292538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 00:33:07.765 [2024-07-25 23:39:05.292716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.765 [2024-07-25 23:39:05.292758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 00:33:07.765 [2024-07-25 23:39:05.292890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.765 [2024-07-25 23:39:05.292916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 00:33:07.765 [2024-07-25 23:39:05.293037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.765 [2024-07-25 23:39:05.293087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 00:33:07.765 [2024-07-25 23:39:05.293241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.765 [2024-07-25 23:39:05.293288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.765 qpair failed and we were unable to recover it. 00:33:07.765 [2024-07-25 23:39:05.293444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.293487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.293654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.293700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.293816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.293843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 
00:33:07.766 [2024-07-25 23:39:05.293954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.293980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.294092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.294119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.294267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.294293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.294438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.294464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.294606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.294633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.294769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.294796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.294934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.294960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.295123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.295149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.295287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.295313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.295421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.295447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 
00:33:07.766 [2024-07-25 23:39:05.295554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.295581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.295716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.295742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.295886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.295912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.296032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.296063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.296195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.296221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.296326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.296351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.296449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.296475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.296580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.296605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.296708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.296735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.296866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.296892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 
00:33:07.766 [2024-07-25 23:39:05.297001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.297026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.297146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.297172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.297273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.297298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.297404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.297430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.297527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.297552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.297668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.297698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.297828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.297853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.297980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.298006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.298140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.766 [2024-07-25 23:39:05.298166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.766 qpair failed and we were unable to recover it. 00:33:07.766 [2024-07-25 23:39:05.298272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.298297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 
00:33:07.767 [2024-07-25 23:39:05.298430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.298455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.298607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.298632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.298759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.298784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.298883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.298909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.299003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.299029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.299135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.299162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.299266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.299291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.299422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.299447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.299547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.299573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.299683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.299708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 
00:33:07.767 [2024-07-25 23:39:05.299837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.299863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.299960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.299985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.300121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.300147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.300282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.300307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.300462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.300488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.300597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.300622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.300788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.300813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.300944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.300969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.301100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.301126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.301254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.301279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 
00:33:07.767 [2024-07-25 23:39:05.301414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.301439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.301545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.301570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.301677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.301702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.301868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.301893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.302020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.302045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.302208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.302234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.302342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.302367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.302486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.302512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.302645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.302670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.302807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.302833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 
00:33:07.767 [2024-07-25 23:39:05.302990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.767 [2024-07-25 23:39:05.303016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.767 qpair failed and we were unable to recover it. 00:33:07.767 [2024-07-25 23:39:05.303150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.303176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.303282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.303308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.303441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.303467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.303592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.303617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.303746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.303772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.303874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.303905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.304036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.304078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.304188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.304213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.304342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.304368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 
00:33:07.768 [2024-07-25 23:39:05.304470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.304496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.304596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.304621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.304764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.304790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.304929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.304954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.305084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.305110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.305216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.305241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.305402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.305428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.305558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.305583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.305739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.305765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.305901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.305925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 
00:33:07.768 [2024-07-25 23:39:05.306068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.306095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.306199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.306224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.306329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.306354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.306485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.306511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.306621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.306647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.306757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.306782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.306941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.306967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.307104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.307130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.307260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.307285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.307387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.307412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 
00:33:07.768 [2024-07-25 23:39:05.307559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.307585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.307712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.307738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.307883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.768 [2024-07-25 23:39:05.307922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.768 qpair failed and we were unable to recover it. 00:33:07.768 [2024-07-25 23:39:05.308071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.769 [2024-07-25 23:39:05.308104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.769 qpair failed and we were unable to recover it. 00:33:07.769 [2024-07-25 23:39:05.308245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.769 [2024-07-25 23:39:05.308271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.769 qpair failed and we were unable to recover it. 00:33:07.769 [2024-07-25 23:39:05.308407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.769 [2024-07-25 23:39:05.308433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.769 qpair failed and we were unable to recover it. 00:33:07.769 [2024-07-25 23:39:05.308536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.769 [2024-07-25 23:39:05.308563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.769 qpair failed and we were unable to recover it. 00:33:07.769 [2024-07-25 23:39:05.308698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.769 [2024-07-25 23:39:05.308724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.769 qpair failed and we were unable to recover it. 00:33:07.769 [2024-07-25 23:39:05.308828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.769 [2024-07-25 23:39:05.308854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.769 qpair failed and we were unable to recover it. 00:33:07.769 [2024-07-25 23:39:05.308963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.769 [2024-07-25 23:39:05.308989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.769 qpair failed and we were unable to recover it. 
00:33:07.769 [2024-07-25 23:39:05.309125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.769 [2024-07-25 23:39:05.309151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.769 qpair failed and we were unable to recover it.
00:33:07.769 [2024-07-25 23:39:05.309257 - 23:39:05.322848] (the same connect() failed, errno = 111 / qpair failed sequence repeats continuously, alternating between tqpair=0x7fdc14000b90 and tqpair=0xfa44b0)
00:33:07.771 [2024-07-25 23:39:05.322973 - 23:39:05.324032] (further repeats of the same connect() failed, errno = 111 / qpair failed sequence against tqpair=0xfa44b0)
00:33:07.772 [2024-07-25 23:39:05.324057] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization...
00:33:07.772 [2024-07-25 23:39:05.324130] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
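For context on the "[ DPDK EAL parameters: ... ]" line above: the nvmf target forwards these flags to DPDK's Environment Abstraction Layer at startup ("-c 0xF0" is a core mask selecting cores 4-7, "--file-prefix=spdk0" namespaces the hugepage files, "--proc-type=auto" lets EAL pick primary/secondary mode). Below is a minimal sketch of how such an argv reaches rte_eal_init(), assuming DPDK development headers are installed; the flag strings are copied from the log, but the surrounding program is illustrative, not SPDK's actual initialization code.

#include <rte_eal.h>
#include <stdio.h>

int main(void)
{
        /* argv copied from the EAL parameters line in the log above */
        char *eal_argv[] = {
                "nvmf",                           /* program name as logged */
                "-c", "0xF0",                     /* core mask: cores 4-7 */
                "--no-telemetry",
                "--log-level=lib.eal:6",
                "--log-level=lib.cryptodev:5",
                "--log-level=lib.power:5",
                "--log-level=user1:6",
                "--base-virtaddr=0x200000000000",
                "--match-allocations",
                "--file-prefix=spdk0",
                "--proc-type=auto",
        };
        int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

        /* rte_eal_init() parses the EAL flags and brings up hugepages,
         * worker cores, and logging; it returns < 0 on failure. */
        if (rte_eal_init(eal_argc, eal_argv) < 0) {
                fprintf(stderr, "EAL initialization failed\n");
                return 1;
        }
        return 0;
}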
00:33:07.772 [2024-07-25 23:39:05.324182 - 23:39:05.341055] (the connect() failed, errno = 111 / qpair failed sequence resumes and keeps repeating for both tqpair=0x7fdc14000b90 and tqpair=0xfa44b0)
00:33:07.775 [2024-07-25 23:39:05.341221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.775 [2024-07-25 23:39:05.341248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.775 qpair failed and we were unable to recover it.
00:33:07.775 [2024-07-25 23:39:05.341355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.775 [2024-07-25 23:39:05.341381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.775 qpair failed and we were unable to recover it. 00:33:07.775 [2024-07-25 23:39:05.341541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.775 [2024-07-25 23:39:05.341567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.775 qpair failed and we were unable to recover it. 00:33:07.775 [2024-07-25 23:39:05.341701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.775 [2024-07-25 23:39:05.341727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.775 qpair failed and we were unable to recover it. 00:33:07.775 [2024-07-25 23:39:05.341855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.775 [2024-07-25 23:39:05.341881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.775 qpair failed and we were unable to recover it. 00:33:07.775 [2024-07-25 23:39:05.341990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.775 [2024-07-25 23:39:05.342015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.775 qpair failed and we were unable to recover it. 00:33:07.775 [2024-07-25 23:39:05.342140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.775 [2024-07-25 23:39:05.342166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.775 qpair failed and we were unable to recover it. 00:33:07.775 [2024-07-25 23:39:05.342274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.775 [2024-07-25 23:39:05.342300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.775 qpair failed and we were unable to recover it. 00:33:07.775 [2024-07-25 23:39:05.342403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.775 [2024-07-25 23:39:05.342428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.775 qpair failed and we were unable to recover it. 00:33:07.775 [2024-07-25 23:39:05.342530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.775 [2024-07-25 23:39:05.342555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.775 qpair failed and we were unable to recover it. 00:33:07.775 [2024-07-25 23:39:05.342658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.775 [2024-07-25 23:39:05.342683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.775 qpair failed and we were unable to recover it. 
00:33:07.775 [2024-07-25 23:39:05.342795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.775 [2024-07-25 23:39:05.342820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.775 qpair failed and we were unable to recover it. 00:33:07.775 [2024-07-25 23:39:05.342948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.775 [2024-07-25 23:39:05.342973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.775 qpair failed and we were unable to recover it. 00:33:07.775 [2024-07-25 23:39:05.343104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.775 [2024-07-25 23:39:05.343130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.775 qpair failed and we were unable to recover it. 00:33:07.775 [2024-07-25 23:39:05.343267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.775 [2024-07-25 23:39:05.343293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.775 qpair failed and we were unable to recover it. 00:33:07.775 [2024-07-25 23:39:05.343419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.775 [2024-07-25 23:39:05.343444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.775 qpair failed and we were unable to recover it. 00:33:07.775 [2024-07-25 23:39:05.343558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.775 [2024-07-25 23:39:05.343584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.775 qpair failed and we were unable to recover it. 00:33:07.775 [2024-07-25 23:39:05.343717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.775 [2024-07-25 23:39:05.343742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.775 qpair failed and we were unable to recover it. 00:33:07.775 [2024-07-25 23:39:05.343902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.775 [2024-07-25 23:39:05.343928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.775 qpair failed and we were unable to recover it. 00:33:07.775 [2024-07-25 23:39:05.344040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.775 [2024-07-25 23:39:05.344070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.775 qpair failed and we were unable to recover it. 00:33:07.775 [2024-07-25 23:39:05.344202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.775 [2024-07-25 23:39:05.344227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.775 qpair failed and we were unable to recover it. 
00:33:07.775 [2024-07-25 23:39:05.344360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.775 [2024-07-25 23:39:05.344386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.775 qpair failed and we were unable to recover it. 00:33:07.775 [2024-07-25 23:39:05.344518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.775 [2024-07-25 23:39:05.344543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.775 qpair failed and we were unable to recover it. 00:33:07.775 [2024-07-25 23:39:05.344652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.344678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.344793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.344819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.344976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.345002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.345143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.345169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.345318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.345357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.345509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.345536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.345674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.345701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.345834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.345859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 
00:33:07.776 [2024-07-25 23:39:05.345961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.345987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.346123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.346150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.346266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.346293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.346406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.346432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.346542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.346569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.346732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.346759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.346889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.346915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.347025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.347051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.347167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.347194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.347302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.347328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 
00:33:07.776 [2024-07-25 23:39:05.347431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.347456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.347585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.347610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.347717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.347742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.347850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.347875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.348003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.348028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.348176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.348202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.348334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.348359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.348460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.348485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.348625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.348650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.348778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.348804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 
00:33:07.776 [2024-07-25 23:39:05.348960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.348985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.349090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.349116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.349253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.349278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.349399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.349438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.349559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.349586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.349698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.349724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.349825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.349851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.349973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.350000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.350149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.776 [2024-07-25 23:39:05.350176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.776 qpair failed and we were unable to recover it. 00:33:07.776 [2024-07-25 23:39:05.350315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.350341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 
00:33:07.777 [2024-07-25 23:39:05.350482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.350508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.350669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.350695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.350795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.350822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.350949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.350974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.351112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.351138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.351242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.351268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.351378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.351404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.351542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.351568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.351662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.351687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.351819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.351845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 
00:33:07.777 [2024-07-25 23:39:05.351978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.352004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.352120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.352147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.352280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.352306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.352466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.352492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.352604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.352631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.352732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.352759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.352885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.352911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.353046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.353080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.353211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.353237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.353375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.353400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 
00:33:07.777 [2024-07-25 23:39:05.353539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.353567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.353702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.353728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.353862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.353888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.354022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.354048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.354178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.354205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.354365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.354391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.354523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.354548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.354653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.354679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.354806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.354833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.354967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.354995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 
00:33:07.777 [2024-07-25 23:39:05.355156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.355181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.355292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.355318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.355477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.355503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.355634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.355664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.355775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.355800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.355932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.355957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.356090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.356116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.777 [2024-07-25 23:39:05.356218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.777 [2024-07-25 23:39:05.356244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.777 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.356384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.356409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.356514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.356540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 
00:33:07.778 [2024-07-25 23:39:05.356669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.356695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.356834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.356859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.356988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.357014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.357116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.357141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.357274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.357301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.357444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.357470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.357580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.357606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.357742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.357768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.357899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.357925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.358034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.358065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 
00:33:07.778 [2024-07-25 23:39:05.358225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.358250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.358376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.358401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.358531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.358557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.358690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.358717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.358877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.358903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.359038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.359072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.359204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.359230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.359334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.359360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.359458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.359484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.359609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.359635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 
00:33:07.778 [2024-07-25 23:39:05.359805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.359831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.359950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.359975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.360138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.360164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.360266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.360292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.360424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.360450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.360588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.360614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.360720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.360745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.360855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.360880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.360988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.361014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 00:33:07.778 [2024-07-25 23:39:05.361151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.778 [2024-07-25 23:39:05.361177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.778 qpair failed and we were unable to recover it. 
00:33:07.778 [2024-07-25 23:39:05.361287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.778 [2024-07-25 23:39:05.361312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.778 qpair failed and we were unable to recover it.
00:33:07.778 [2024-07-25 23:39:05.361450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.778 [2024-07-25 23:39:05.361477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.778 qpair failed and we were unable to recover it.
00:33:07.778 [2024-07-25 23:39:05.361638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.778 [2024-07-25 23:39:05.361664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.778 qpair failed and we were unable to recover it.
00:33:07.778 [2024-07-25 23:39:05.361825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.778 [2024-07-25 23:39:05.361851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.778 qpair failed and we were unable to recover it.
00:33:07.778 [2024-07-25 23:39:05.361990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.778 [2024-07-25 23:39:05.362029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.778 qpair failed and we were unable to recover it.
00:33:07.778 [2024-07-25 23:39:05.362208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.778 [2024-07-25 23:39:05.362236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.778 qpair failed and we were unable to recover it.
00:33:07.778 [2024-07-25 23:39:05.362345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.778 [2024-07-25 23:39:05.362371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.778 qpair failed and we were unable to recover it.
00:33:07.778 [2024-07-25 23:39:05.362502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.778 [2024-07-25 23:39:05.362528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.778 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.362641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.362667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.362822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.362848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.362951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.362978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.363083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.363109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.363225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.363251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.363389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.363414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.363516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.363541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.363648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.363673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.363778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.363803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.363932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.363957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.364094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.364120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.364251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.364276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 EAL: No free 2048 kB hugepages reported on node 1
00:33:07.779 [2024-07-25 23:39:05.364375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.364400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.364534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.364561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.364692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.364718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.364832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.364857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.364982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.365007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.365144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.365172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.365303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.365330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.365494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.365520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.365651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.365678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.365810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.365836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.365968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.365994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.366140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.366167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.366298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.366324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.366462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.366488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.366592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.366618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.366717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.366743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.366844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.366869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.366995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.367022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.367164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.367191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.367349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.367376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.367527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.367576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.367722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.367749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.367867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.367893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.368031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.368057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.368227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.368259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.368398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.368424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.368583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.368583] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:33:07.779 [2024-07-25 23:39:05.368610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.779 [2024-07-25 23:39:05.368740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.779 [2024-07-25 23:39:05.368767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.779 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.368922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.368948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.369091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.369119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.369227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.369252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.369360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.369386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.369517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.369543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.369703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.369729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.369831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.369856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.369973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.369999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.370131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.370157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.370260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.370289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.370396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.370421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.370519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.370544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.370680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.370705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.370840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.370865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.370969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.370995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.371149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.371175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.371304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.371329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.371439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.371464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.371569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.371594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.371726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.371752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.371889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.371914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.372048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.372079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.372187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.372215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.372360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.372387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.372524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.372551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.372664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.372690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.372798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.372824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.372934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.372960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.373076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.373102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.373240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.780 [2024-07-25 23:39:05.373265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.780 qpair failed and we were unable to recover it.
00:33:07.780 [2024-07-25 23:39:05.373386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.373411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.373515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.373540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.373638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.373664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.373798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.373823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.373930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.373955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.374082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.374108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.374211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.374241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.374370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.374396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.374528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.374553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.374681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.374706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.374830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.374857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.374962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.374987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.375093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.375120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.375256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.375283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.375417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.375443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.375579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.375604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.375744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.375771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.375878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.375903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.376005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.376030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.376194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.376220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.376340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.376365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.376467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.376493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.376626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.376652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.376759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.376784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.376917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.376942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.377047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.377077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.377178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.377203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.377315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.377340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.377452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.377479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.377576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.377602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.377736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.377762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.377862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.377887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.378000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.378025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.378139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.378165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.378272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.378298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.378434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.378460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.378585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.378610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.378713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.378738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.378839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.378864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.378966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.378992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.379102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.379130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.379294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.379320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.379426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.379453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.379585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.379611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.379742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.379768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.379875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.379901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.380015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.380041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.380210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.380250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.380395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.380421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.380531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.380557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.380691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.380716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.380878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.380903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.781 [2024-07-25 23:39:05.381006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.781 [2024-07-25 23:39:05.381033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.781 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.381142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.381168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.381314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.381339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.381468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.381494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.381603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.381629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.381761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.381787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.381916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.381954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.382092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.382119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.382257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.382288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.382384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.382411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.382513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.382539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.382669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.382694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.382806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.382832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.382938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.382965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.383083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.383109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.383219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.383244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.383372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.383397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.383537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.383563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.383666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.383693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.383802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.383831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.383968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.383994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.384157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.384182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.384329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.384356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.384491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.384516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.384652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.384677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.384804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.384830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.384967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.384992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.385103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.385131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.385241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.385267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.385400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.385425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.385559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.385585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.385707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.385746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.385861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.385888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.385993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.386020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.386134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.386161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.386281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.386310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.386415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.386441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.386549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.386575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.386679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.386704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.386845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.386869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.387011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.387036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.387204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.387230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.387337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.387362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.387470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.387495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.387629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.387654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.387766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.387792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.387963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.387988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.782 [2024-07-25 23:39:05.388126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.782 [2024-07-25 23:39:05.388151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.782 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.388311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.388338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.388503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.388530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.388670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.388696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.388825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.388851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.388964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.388990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.389158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.389185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.389322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.389348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.389481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.389506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.389614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.389640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.389802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.389827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.389929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.389954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.390096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.390123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.390287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.390312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.390417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.390442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.390582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.390609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.390746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.390771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.390929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.390968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.391081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.391108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.391220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.391246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.391382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.391408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.391536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.391562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.391685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.391724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.391893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.391919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.392056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.392087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.392224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.392249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.392381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.392406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.392540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.392566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.392674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.392704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.392805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.392831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.392966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.392992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.393137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.783 [2024-07-25 23:39:05.393163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.783 qpair failed and we were unable to recover it.
00:33:07.783 [2024-07-25 23:39:05.393305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.783 [2024-07-25 23:39:05.393331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.783 qpair failed and we were unable to recover it. 00:33:07.783 [2024-07-25 23:39:05.393509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.783 [2024-07-25 23:39:05.393535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.783 qpair failed and we were unable to recover it. 00:33:07.783 [2024-07-25 23:39:05.393671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.783 [2024-07-25 23:39:05.393696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.783 qpair failed and we were unable to recover it. 00:33:07.783 [2024-07-25 23:39:05.393839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.783 [2024-07-25 23:39:05.393864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.783 qpair failed and we were unable to recover it. 00:33:07.783 [2024-07-25 23:39:05.393993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.783 [2024-07-25 23:39:05.394018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.783 qpair failed and we were unable to recover it. 00:33:07.783 [2024-07-25 23:39:05.394137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.783 [2024-07-25 23:39:05.394163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.783 qpair failed and we were unable to recover it. 00:33:07.783 [2024-07-25 23:39:05.394278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.783 [2024-07-25 23:39:05.394304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.783 qpair failed and we were unable to recover it. 00:33:07.783 [2024-07-25 23:39:05.394409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.783 [2024-07-25 23:39:05.394434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.783 qpair failed and we were unable to recover it. 00:33:07.783 [2024-07-25 23:39:05.394536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.783 [2024-07-25 23:39:05.394562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.783 qpair failed and we were unable to recover it. 00:33:07.783 [2024-07-25 23:39:05.394693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.783 [2024-07-25 23:39:05.394718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.783 qpair failed and we were unable to recover it. 
00:33:07.783 [2024-07-25 23:39:05.394876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.783 [2024-07-25 23:39:05.394915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.783 qpair failed and we were unable to recover it. 00:33:07.783 [2024-07-25 23:39:05.395035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.783 [2024-07-25 23:39:05.395070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.783 qpair failed and we were unable to recover it. 00:33:07.783 [2024-07-25 23:39:05.395211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.783 [2024-07-25 23:39:05.395237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.783 qpair failed and we were unable to recover it. 00:33:07.783 [2024-07-25 23:39:05.395345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.395371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.395528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.395553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.395712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.395738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.395852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.395879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.396011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.396036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.396165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.396203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.396323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.396351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 
00:33:07.784 [2024-07-25 23:39:05.396462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.396488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.396649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.396675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.396784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.396810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.396920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.396946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.397091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.397119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.397231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.397256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.397403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.397429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.397444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:07.784 [2024-07-25 23:39:05.397533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.397559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.397684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.397709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 
00:33:07.784 [2024-07-25 23:39:05.397840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.397866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.398007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.398038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.398176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.398202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.398310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.398335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.398494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.398530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.398665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.398691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.398821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.398855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.399002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.399029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.399141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.399168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.399297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.399322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 
00:33:07.784 [2024-07-25 23:39:05.399460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.399485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.399617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.399643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.399789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.399816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.399957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.399982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.400109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.400148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.400265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.400292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.400424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.400449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.400555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.400581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.400690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.400715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.784 [2024-07-25 23:39:05.400853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.400878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 
00:33:07.784 [2024-07-25 23:39:05.400989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.784 [2024-07-25 23:39:05.401020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.784 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.401191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.401218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.401329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.401355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.401455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.401481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.401641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.401666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.401789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.401829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.401971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.401998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.402110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.402137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.402268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.402294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.402430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.402456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 
00:33:07.785 [2024-07-25 23:39:05.402614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.402640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.402768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.402794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.402920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.402945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.403100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.403126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.403264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.403290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.403388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.403414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.403543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.403568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.403702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.403727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.403833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.403858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.403962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.403987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 
00:33:07.785 [2024-07-25 23:39:05.404102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.404130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.404263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.404288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.404422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.404447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.404555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.404580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.404685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.404710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.404810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.404835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.404971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.404996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.405137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.405168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.405302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.405327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.405486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.405512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 
00:33:07.785 [2024-07-25 23:39:05.405648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.405674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.405781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.405806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.405914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.405939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.406104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.406131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.406270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.406296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.406438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.406464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.406622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.406649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.406753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.406778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.406937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.406976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.407107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.407146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 
00:33:07.785 [2024-07-25 23:39:05.407256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.407283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.407392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.407420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.407584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.407609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.407747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.407774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.407910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.407935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.785 qpair failed and we were unable to recover it. 00:33:07.785 [2024-07-25 23:39:05.408069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.785 [2024-07-25 23:39:05.408095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.408212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.408239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.408353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.408379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.408516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.408542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.408673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.408699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 
00:33:07.786 [2024-07-25 23:39:05.408861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.408887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.409042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.409102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.409224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.409251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.409407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.409433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.409564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.409595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.409699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.409725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.409833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.409858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.409978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.410005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.410170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.410197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.410297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.410323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 
00:33:07.786 [2024-07-25 23:39:05.410461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.410487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.410647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.410673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.410808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.410833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.410954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.410993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.411132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.411160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.411298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.411324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.411486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.411512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.411627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.411652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.411795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.411822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.411951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.411977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 
00:33:07.786 [2024-07-25 23:39:05.412085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.412112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.412241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.412268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.412378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.412404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.412565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.412590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.412737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.412763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.412900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.412926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.413082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.413109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.413240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.413267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.413376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.413403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 00:33:07.786 [2024-07-25 23:39:05.413512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.786 [2024-07-25 23:39:05.413539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:07.786 qpair failed and we were unable to recover it. 
00:33:07.786 [2024-07-25 23:39:05.413668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.786 [2024-07-25 23:39:05.413695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.786 qpair failed and we were unable to recover it.
00:33:07.786 [2024-07-25 23:39:05.413871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.786 [2024-07-25 23:39:05.413910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.786 qpair failed and we were unable to recover it.
00:33:07.786 [2024-07-25 23:39:05.414035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.786 [2024-07-25 23:39:05.414068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.786 qpair failed and we were unable to recover it.
00:33:07.786 [2024-07-25 23:39:05.414175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.786 [2024-07-25 23:39:05.414201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.786 qpair failed and we were unable to recover it.
00:33:07.786 [2024-07-25 23:39:05.414306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.786 [2024-07-25 23:39:05.414332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.786 qpair failed and we were unable to recover it.
00:33:07.786 [2024-07-25 23:39:05.414472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.786 [2024-07-25 23:39:05.414498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.786 qpair failed and we were unable to recover it.
00:33:07.786 [2024-07-25 23:39:05.414600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.786 [2024-07-25 23:39:05.414625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.786 qpair failed and we were unable to recover it.
00:33:07.786 [2024-07-25 23:39:05.414787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.786 [2024-07-25 23:39:05.414814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.786 qpair failed and we were unable to recover it.
00:33:07.786 [2024-07-25 23:39:05.414946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.786 [2024-07-25 23:39:05.414972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.786 qpair failed and we were unable to recover it.
00:33:07.786 [2024-07-25 23:39:05.415075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.786 [2024-07-25 23:39:05.415102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.786 qpair failed and we were unable to recover it.
00:33:07.786 [2024-07-25 23:39:05.415234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.415260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.415368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.415395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.415499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.415525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.415654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.415680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.415790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.415815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.415965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.416006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.416129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.416158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.416302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.416327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.416487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.416513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.416644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.416670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.416801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.416828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.416951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.416977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.417090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.417129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.417269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.417295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.417409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.417435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.417573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.417599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.417706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.417733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.417846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.417872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.418018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.418056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.418209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.418235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.418337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.418363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.418523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.418550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.418657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.418682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.418788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.418816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.418959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.418987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.419118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.419146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.419282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.419307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.419432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.419458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.419597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.419622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.419736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.419762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.419889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.419922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.420034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.420072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.420207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.420233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.420362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.420388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.420494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.420519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.420649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.420675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.420835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.420861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.420964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.420990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.421112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.421150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.421318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.421344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.421477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.421502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.421612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.421638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.421745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.421771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.421872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.421898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.787 qpair failed and we were unable to recover it.
00:33:07.787 [2024-07-25 23:39:05.422026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.787 [2024-07-25 23:39:05.422051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.422173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.422199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.422334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.422360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.422496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.422521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.422655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.422680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.422808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.422833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.422952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.422991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.423140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.423168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.423275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.423302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.423438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.423464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.423572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.423598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.423706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.423732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.423870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.423896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.424035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.424067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.424205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.424237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.424373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.424399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.424509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.424535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.424671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.424698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.424868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.424894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.425030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.425056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.425208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.425234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.425341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.425366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.425469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.425496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.425604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.425630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.425736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.425762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.425898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.425924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.426038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.426082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.426204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.426230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.426365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.426391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:07.788 qpair failed and we were unable to recover it.
00:33:07.788 [2024-07-25 23:39:05.426503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:07.788 [2024-07-25 23:39:05.426529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.070 qpair failed and we were unable to recover it.
00:33:08.070 [2024-07-25 23:39:05.426667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.070 [2024-07-25 23:39:05.426694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.070 qpair failed and we were unable to recover it.
00:33:08.070 [2024-07-25 23:39:05.426833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.070 [2024-07-25 23:39:05.426860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.070 qpair failed and we were unable to recover it.
00:33:08.070 [2024-07-25 23:39:05.426966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.070 [2024-07-25 23:39:05.426993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.070 qpair failed and we were unable to recover it.
00:33:08.070 [2024-07-25 23:39:05.427123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.070 [2024-07-25 23:39:05.427150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.070 qpair failed and we were unable to recover it.
00:33:08.070 [2024-07-25 23:39:05.427252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.070 [2024-07-25 23:39:05.427279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.070 qpair failed and we were unable to recover it.
00:33:08.070 [2024-07-25 23:39:05.427421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.070 [2024-07-25 23:39:05.427448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.070 qpair failed and we were unable to recover it.
00:33:08.070 [2024-07-25 23:39:05.427570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.070 [2024-07-25 23:39:05.427596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.070 qpair failed and we were unable to recover it.
00:33:08.070 [2024-07-25 23:39:05.427707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.070 [2024-07-25 23:39:05.427734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.070 qpair failed and we were unable to recover it.
00:33:08.070 [2024-07-25 23:39:05.427836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.070 [2024-07-25 23:39:05.427863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.070 qpair failed and we were unable to recover it.
00:33:08.070 [2024-07-25 23:39:05.427979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.070 [2024-07-25 23:39:05.428018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.070 qpair failed and we were unable to recover it.
00:33:08.070 [2024-07-25 23:39:05.428143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.070 [2024-07-25 23:39:05.428183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:08.070 qpair failed and we were unable to recover it.
00:33:08.070 [2024-07-25 23:39:05.428321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.070 [2024-07-25 23:39:05.428361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:08.070 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.428476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.428505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.428615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.428642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.428781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.428808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.428917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.428944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.429069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.429098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.429239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.429265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.429369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.429394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.429531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.429557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.429698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.429725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.429829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.429854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.435073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.435104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.435267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.435297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.435532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.435560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.435679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.435706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.435842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.435868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.436647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.436677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.436852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.436878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.436983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.437010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.437144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.437170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.437277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.437303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.437472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.437497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.437607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.437633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.437769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.437796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.437908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.437933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.438115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.438155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.438300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.438328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.438441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.438467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.438605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.438632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.438769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.438795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.438899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.438925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.439085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.071 [2024-07-25 23:39:05.439111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.071 qpair failed and we were unable to recover it.
00:33:08.071 [2024-07-25 23:39:05.439246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.439272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.439432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.439459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.439570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.439596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.439739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.439766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.439922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.439949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.440053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.440084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.440197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.440223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.440390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.440416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.440774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.440802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.440972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.440999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.441170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.441197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.441307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.441332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.441507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.441533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.441645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.441671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.441802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.441828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.441961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.441987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.442105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.442132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.442264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.442291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.442460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.442485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.442592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.442618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.442739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.442765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.442898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.442924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.443067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.443093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.443225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.443251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.443386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.443414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.443545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.443571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.443675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.443700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.443818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.443845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.444007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.444032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.444149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.444175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.444307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.072 [2024-07-25 23:39:05.444334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.072 qpair failed and we were unable to recover it.
00:33:08.072 [2024-07-25 23:39:05.444487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.444513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.444617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.444642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.444758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.444783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.444910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.444935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.445073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.445104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.445214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.445239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.445373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.445399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.445529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.445555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.445688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.445714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.445872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.445898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.446030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.446057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.446201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.446228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.446340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.446365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.446528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.446554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.446678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.446704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.446829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.446870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.447008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.447035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.447188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.447214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.447352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.447385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.447494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.447519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.447620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.447649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.447763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.447789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.447921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.447948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.448114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.448154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.448270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.448298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.448466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.448492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.448622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.448648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.448750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.448777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.448907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.448933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.449036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.449068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.449200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.449226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.073 [2024-07-25 23:39:05.449337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.073 [2024-07-25 23:39:05.449363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.073 qpair failed and we were unable to recover it.
00:33:08.074 [2024-07-25 23:39:05.449502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.074 [2024-07-25 23:39:05.449528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.074 qpair failed and we were unable to recover it.
00:33:08.074 [2024-07-25 23:39:05.449629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.074 [2024-07-25 23:39:05.449655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.074 qpair failed and we were unable to recover it.
00:33:08.074 [2024-07-25 23:39:05.449754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.074 [2024-07-25 23:39:05.449779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.074 qpair failed and we were unable to recover it.
00:33:08.074 [2024-07-25 23:39:05.449918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.074 [2024-07-25 23:39:05.449946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:08.074 qpair failed and we were unable to recover it.
00:33:08.074 [2024-07-25 23:39:05.450103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.074 [2024-07-25 23:39:05.450141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.074 qpair failed and we were unable to recover it.
00:33:08.074 [2024-07-25 23:39:05.450289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.074 [2024-07-25 23:39:05.450318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.074 qpair failed and we were unable to recover it.
00:33:08.074 [2024-07-25 23:39:05.450483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.074 [2024-07-25 23:39:05.450510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.074 qpair failed and we were unable to recover it.
00:33:08.074 [2024-07-25 23:39:05.450617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.074 [2024-07-25 23:39:05.450643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.074 qpair failed and we were unable to recover it.
00:33:08.074 [2024-07-25 23:39:05.450747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.074 [2024-07-25 23:39:05.450773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.074 qpair failed and we were unable to recover it.
00:33:08.074 [2024-07-25 23:39:05.450911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.074 [2024-07-25 23:39:05.450938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.074 qpair failed and we were unable to recover it.
00:33:08.074 [2024-07-25 23:39:05.451096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.074 [2024-07-25 23:39:05.451122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.074 qpair failed and we were unable to recover it.
00:33:08.074 [2024-07-25 23:39:05.451259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.074 [2024-07-25 23:39:05.451285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.074 qpair failed and we were unable to recover it.
00:33:08.074 [2024-07-25 23:39:05.451398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.074 [2024-07-25 23:39:05.451428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.074 qpair failed and we were unable to recover it.
00:33:08.074 [2024-07-25 23:39:05.451539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.074 [2024-07-25 23:39:05.451566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.074 qpair failed and we were unable to recover it.
00:33:08.074 [2024-07-25 23:39:05.451727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.074 [2024-07-25 23:39:05.451753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.074 qpair failed and we were unable to recover it.
00:33:08.074 [2024-07-25 23:39:05.451868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.074 [2024-07-25 23:39:05.451894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.074 qpair failed and we were unable to recover it.
00:33:08.074 [2024-07-25 23:39:05.452009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.074 [2024-07-25 23:39:05.452035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.074 qpair failed and we were unable to recover it.
00:33:08.074 [2024-07-25 23:39:05.452200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.074 [2024-07-25 23:39:05.452226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.074 qpair failed and we were unable to recover it.
00:33:08.074 [2024-07-25 23:39:05.452339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.074 [2024-07-25 23:39:05.452369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.074 qpair failed and we were unable to recover it.
00:33:08.074 [2024-07-25 23:39:05.452531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.074 [2024-07-25 23:39:05.452556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.074 qpair failed and we were unable to recover it.
00:33:08.074 [2024-07-25 23:39:05.452691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.074 [2024-07-25 23:39:05.452717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.074 qpair failed and we were unable to recover it. 00:33:08.074 [2024-07-25 23:39:05.452860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.074 [2024-07-25 23:39:05.452886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.074 qpair failed and we were unable to recover it. 00:33:08.074 [2024-07-25 23:39:05.453040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.074 [2024-07-25 23:39:05.453092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.074 qpair failed and we were unable to recover it. 00:33:08.074 [2024-07-25 23:39:05.453239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.074 [2024-07-25 23:39:05.453266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.074 qpair failed and we were unable to recover it. 00:33:08.074 [2024-07-25 23:39:05.453402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.074 [2024-07-25 23:39:05.453437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.074 qpair failed and we were unable to recover it. 00:33:08.074 [2024-07-25 23:39:05.453562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.074 [2024-07-25 23:39:05.453589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.074 qpair failed and we were unable to recover it. 00:33:08.074 [2024-07-25 23:39:05.453701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.074 [2024-07-25 23:39:05.453727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.074 qpair failed and we were unable to recover it. 00:33:08.074 [2024-07-25 23:39:05.453845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.074 [2024-07-25 23:39:05.453871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.074 qpair failed and we were unable to recover it. 00:33:08.074 [2024-07-25 23:39:05.453973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.074 [2024-07-25 23:39:05.453999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.074 qpair failed and we were unable to recover it. 00:33:08.074 [2024-07-25 23:39:05.454120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.074 [2024-07-25 23:39:05.454146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.074 qpair failed and we were unable to recover it. 
00:33:08.074 [2024-07-25 23:39:05.454255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.074 [2024-07-25 23:39:05.454280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.074 qpair failed and we were unable to recover it. 00:33:08.074 [2024-07-25 23:39:05.454390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.454416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.454521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.454546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.454678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.454703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.454838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.454864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.454973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.454999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.455124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.455150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.455254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.455280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.455413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.455444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.455573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.455604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 
00:33:08.075 [2024-07-25 23:39:05.455726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.455764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.455903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.455930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.456036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.456082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.456214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.456241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.456367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.456393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.456494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.456520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.456661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.456688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.456818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.456844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.456969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.456995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.457097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.457124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 
00:33:08.075 [2024-07-25 23:39:05.457234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.457259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.457374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.457399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.457510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.457535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.457682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.457708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.457809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.457836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.457966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.457992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.458128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.458155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.458272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.458298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.458408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.458433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.458591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.458617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 
00:33:08.075 [2024-07-25 23:39:05.458723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.458749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.458888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.458916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.075 qpair failed and we were unable to recover it. 00:33:08.075 [2024-07-25 23:39:05.459055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.075 [2024-07-25 23:39:05.459090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.076 qpair failed and we were unable to recover it. 00:33:08.076 [2024-07-25 23:39:05.459253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.076 [2024-07-25 23:39:05.459279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.076 qpair failed and we were unable to recover it. 00:33:08.076 [2024-07-25 23:39:05.459426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.076 [2024-07-25 23:39:05.459452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.076 qpair failed and we were unable to recover it. 00:33:08.076 [2024-07-25 23:39:05.459583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.076 [2024-07-25 23:39:05.459608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.076 qpair failed and we were unable to recover it. 00:33:08.076 [2024-07-25 23:39:05.459711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.076 [2024-07-25 23:39:05.459742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.076 qpair failed and we were unable to recover it. 00:33:08.076 [2024-07-25 23:39:05.459858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.076 [2024-07-25 23:39:05.459884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.076 qpair failed and we were unable to recover it. 00:33:08.076 [2024-07-25 23:39:05.459996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.076 [2024-07-25 23:39:05.460022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.076 qpair failed and we were unable to recover it. 00:33:08.076 [2024-07-25 23:39:05.460131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.076 [2024-07-25 23:39:05.460157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.076 qpair failed and we were unable to recover it. 
00:33:08.076 [2024-07-25 23:39:05.460306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.076 [2024-07-25 23:39:05.460331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.076 qpair failed and we were unable to recover it. 00:33:08.076 [2024-07-25 23:39:05.460463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.076 [2024-07-25 23:39:05.460488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.076 qpair failed and we were unable to recover it. 00:33:08.076 [2024-07-25 23:39:05.460593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.076 [2024-07-25 23:39:05.460619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.076 qpair failed and we were unable to recover it. 00:33:08.076 [2024-07-25 23:39:05.460722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.076 [2024-07-25 23:39:05.460747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.076 qpair failed and we were unable to recover it. 00:33:08.076 [2024-07-25 23:39:05.460907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.076 [2024-07-25 23:39:05.460932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.076 qpair failed and we were unable to recover it. 00:33:08.076 [2024-07-25 23:39:05.461045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.076 [2024-07-25 23:39:05.461096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.076 qpair failed and we were unable to recover it. 00:33:08.076 [2024-07-25 23:39:05.461266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.076 [2024-07-25 23:39:05.461293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.076 qpair failed and we were unable to recover it. 00:33:08.076 [2024-07-25 23:39:05.461416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.076 [2024-07-25 23:39:05.461442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.076 qpair failed and we were unable to recover it. 00:33:08.076 [2024-07-25 23:39:05.461605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.076 [2024-07-25 23:39:05.461631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.076 qpair failed and we were unable to recover it. 00:33:08.076 [2024-07-25 23:39:05.461789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.076 [2024-07-25 23:39:05.461816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.076 qpair failed and we were unable to recover it. 
00:33:08.076 [2024-07-25 23:39:05.461922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.076 [2024-07-25 23:39:05.461949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.076 qpair failed and we were unable to recover it. 00:33:08.076 [2024-07-25 23:39:05.462087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.076 [2024-07-25 23:39:05.462114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.076 qpair failed and we were unable to recover it. 00:33:08.076 [2024-07-25 23:39:05.462247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.076 [2024-07-25 23:39:05.462273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.076 qpair failed and we were unable to recover it. 00:33:08.076 [2024-07-25 23:39:05.462418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.076 [2024-07-25 23:39:05.462444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.076 qpair failed and we were unable to recover it. 00:33:08.076 [2024-07-25 23:39:05.462600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.076 [2024-07-25 23:39:05.462625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.076 qpair failed and we were unable to recover it. 00:33:08.076 [2024-07-25 23:39:05.462732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.076 [2024-07-25 23:39:05.462757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.076 qpair failed and we were unable to recover it. 00:33:08.076 [2024-07-25 23:39:05.462887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.076 [2024-07-25 23:39:05.462912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.076 qpair failed and we were unable to recover it. 00:33:08.076 [2024-07-25 23:39:05.463006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.463032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.463165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.463204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.463345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.463375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 
00:33:08.077 [2024-07-25 23:39:05.463488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.463516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.463652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.463679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.463801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.463828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.463968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.464001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.464167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.464194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.464308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.464334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.464440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.464466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.464571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.464596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.464727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.464752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.464880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.464906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 
00:33:08.077 [2024-07-25 23:39:05.465040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.465080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.465178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.465203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.465299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.465325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.465457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.465482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.465609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.465635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.465749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.465787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.465906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.465934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.466044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.466082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.466223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.466250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.466352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.466378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 
00:33:08.077 [2024-07-25 23:39:05.466537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.466564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.466669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.466695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.466832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.466857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.466954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.466980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.467078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.467105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.467213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.467241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.467358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.467385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.467497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.467523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.077 [2024-07-25 23:39:05.467684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.077 [2024-07-25 23:39:05.467709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.077 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.467840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.467866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 
00:33:08.078 [2024-07-25 23:39:05.467998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.468028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.468153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.468180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.468315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.468341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.468470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.468496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.468628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.468653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.468761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.468787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.468892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.468918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.469076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.469102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.469204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.469229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.469354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.469379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 
00:33:08.078 [2024-07-25 23:39:05.469513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.469540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.469647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.469673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.469809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.469845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.469948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.469974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.470104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.470130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.470242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.470268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.470372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.470397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.470526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.470552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.470678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.470704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.470810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.470836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 
00:33:08.078 [2024-07-25 23:39:05.470973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.470998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.471130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.471156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.471295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.471321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.471454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.471480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.471610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.471636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.471772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.471798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.471925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.471950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.472084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.472110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.472247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.472273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 00:33:08.078 [2024-07-25 23:39:05.472370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.472395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.078 qpair failed and we were unable to recover it. 
00:33:08.078 [2024-07-25 23:39:05.472551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.078 [2024-07-25 23:39:05.472576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.472705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.472731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.472858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.472883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.473002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.473041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.473182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.473222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.473369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.473396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.473533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.473559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.473671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.473699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.473831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.473858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.474018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.474044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 
00:33:08.079 [2024-07-25 23:39:05.474169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.474194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.474304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.474330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.474494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.474521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.474631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.474657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.474826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.474852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.474980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.475006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.475112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.475138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.475240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.475265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.475430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.475456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.475595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.475621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 
00:33:08.079 [2024-07-25 23:39:05.475756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.475782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.475919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.475944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.476084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.476110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.476210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.476235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.476383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.476422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.476593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.476621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.476718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.476744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.476846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.476872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.477000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.477039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.477199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.477227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 
00:33:08.079 [2024-07-25 23:39:05.477392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.477419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.477548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.079 [2024-07-25 23:39:05.477574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.079 qpair failed and we were unable to recover it. 00:33:08.079 [2024-07-25 23:39:05.477733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.477758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.477871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.477897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.478035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.478072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.478181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.478207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.478349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.478375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.478483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.478509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.478615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.478640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.478748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.478774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 
00:33:08.080 [2024-07-25 23:39:05.478903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.478928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.479040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.479072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.479216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.479244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.479374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.479399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.479535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.479561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.479664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.479689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.479821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.479847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.479969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.479995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.480149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.480189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.480319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.480357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 
00:33:08.080 [2024-07-25 23:39:05.480480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.480506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.480603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.480629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.480764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.480791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.480925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.480951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.481068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.481095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.481200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.481225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.481349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.481386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.481485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.481511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.481673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.481698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.481822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.481848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 
00:33:08.080 [2024-07-25 23:39:05.481960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.481986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.482147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.482174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.482271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.482298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.482457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.080 [2024-07-25 23:39:05.482483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.080 qpair failed and we were unable to recover it. 00:33:08.080 [2024-07-25 23:39:05.482620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.482647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.081 qpair failed and we were unable to recover it. 00:33:08.081 [2024-07-25 23:39:05.482791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.482818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.081 qpair failed and we were unable to recover it. 00:33:08.081 [2024-07-25 23:39:05.482925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.482951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.081 qpair failed and we were unable to recover it. 00:33:08.081 [2024-07-25 23:39:05.483087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.483114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.081 qpair failed and we were unable to recover it. 00:33:08.081 [2024-07-25 23:39:05.483248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.483274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.081 qpair failed and we were unable to recover it. 00:33:08.081 [2024-07-25 23:39:05.483408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.483433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.081 qpair failed and we were unable to recover it. 
00:33:08.081 [2024-07-25 23:39:05.483533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.483559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.081 qpair failed and we were unable to recover it. 00:33:08.081 [2024-07-25 23:39:05.483696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.483722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.081 qpair failed and we were unable to recover it. 00:33:08.081 [2024-07-25 23:39:05.483819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.483845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.081 qpair failed and we were unable to recover it. 00:33:08.081 [2024-07-25 23:39:05.483978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.484004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.081 qpair failed and we were unable to recover it. 00:33:08.081 [2024-07-25 23:39:05.484132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.484159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.081 qpair failed and we were unable to recover it. 00:33:08.081 [2024-07-25 23:39:05.484294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.484319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.081 qpair failed and we were unable to recover it. 00:33:08.081 [2024-07-25 23:39:05.484444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.484469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.081 qpair failed and we were unable to recover it. 00:33:08.081 [2024-07-25 23:39:05.484593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.484619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.081 qpair failed and we were unable to recover it. 00:33:08.081 [2024-07-25 23:39:05.484753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.484779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.081 qpair failed and we were unable to recover it. 00:33:08.081 [2024-07-25 23:39:05.484943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.484969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.081 qpair failed and we were unable to recover it. 
00:33:08.081 [2024-07-25 23:39:05.485070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.485096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.081 qpair failed and we were unable to recover it. 00:33:08.081 [2024-07-25 23:39:05.485231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.485258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.081 qpair failed and we were unable to recover it. 00:33:08.081 [2024-07-25 23:39:05.485412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.485451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.081 qpair failed and we were unable to recover it. 00:33:08.081 [2024-07-25 23:39:05.485590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.485617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.081 qpair failed and we were unable to recover it. 00:33:08.081 [2024-07-25 23:39:05.485755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.485781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.081 qpair failed and we were unable to recover it. 00:33:08.081 [2024-07-25 23:39:05.485889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.485915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.081 qpair failed and we were unable to recover it. 00:33:08.081 [2024-07-25 23:39:05.486045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.486082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.081 qpair failed and we were unable to recover it. 00:33:08.081 [2024-07-25 23:39:05.486219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.486245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.081 qpair failed and we were unable to recover it. 00:33:08.081 [2024-07-25 23:39:05.486352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.486379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.081 qpair failed and we were unable to recover it. 00:33:08.081 [2024-07-25 23:39:05.486512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.486538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.081 qpair failed and we were unable to recover it. 
00:33:08.081 [2024-07-25 23:39:05.486666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.081 [2024-07-25 23:39:05.486692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.082 qpair failed and we were unable to recover it. 00:33:08.082 [2024-07-25 23:39:05.486795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.082 [2024-07-25 23:39:05.486822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.082 qpair failed and we were unable to recover it. 00:33:08.082 [2024-07-25 23:39:05.486938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.082 [2024-07-25 23:39:05.486968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.082 qpair failed and we were unable to recover it. 00:33:08.082 [2024-07-25 23:39:05.487095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.082 [2024-07-25 23:39:05.487134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.082 qpair failed and we were unable to recover it. 00:33:08.082 [2024-07-25 23:39:05.487276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.082 [2024-07-25 23:39:05.487303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.082 qpair failed and we were unable to recover it. 00:33:08.082 [2024-07-25 23:39:05.487417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.082 [2024-07-25 23:39:05.487443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.082 qpair failed and we were unable to recover it. 00:33:08.082 [2024-07-25 23:39:05.487550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.082 [2024-07-25 23:39:05.487576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.082 qpair failed and we were unable to recover it. 00:33:08.082 [2024-07-25 23:39:05.487708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.082 [2024-07-25 23:39:05.487736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.082 qpair failed and we were unable to recover it. 00:33:08.082 [2024-07-25 23:39:05.487898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.082 [2024-07-25 23:39:05.487924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.082 qpair failed and we were unable to recover it. 00:33:08.082 [2024-07-25 23:39:05.488070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.082 [2024-07-25 23:39:05.488098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.082 qpair failed and we were unable to recover it. 
00:33:08.082 [2024-07-25 23:39:05.488233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.082 [2024-07-25 23:39:05.488259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.082 qpair failed and we were unable to recover it. 00:33:08.082 [2024-07-25 23:39:05.488359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.082 [2024-07-25 23:39:05.488384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.082 qpair failed and we were unable to recover it. 00:33:08.082 [2024-07-25 23:39:05.488485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.082 [2024-07-25 23:39:05.488510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.082 qpair failed and we were unable to recover it. 00:33:08.082 [2024-07-25 23:39:05.488644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.082 [2024-07-25 23:39:05.488669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.082 qpair failed and we were unable to recover it. 00:33:08.082 [2024-07-25 23:39:05.488778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.082 [2024-07-25 23:39:05.488803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.082 qpair failed and we were unable to recover it. 00:33:08.082 [2024-07-25 23:39:05.488916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.082 [2024-07-25 23:39:05.488941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.082 qpair failed and we were unable to recover it. 00:33:08.082 [2024-07-25 23:39:05.489053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.082 [2024-07-25 23:39:05.489085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.082 qpair failed and we were unable to recover it. 00:33:08.082 [2024-07-25 23:39:05.489230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.082 [2024-07-25 23:39:05.489256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.082 qpair failed and we were unable to recover it. 00:33:08.082 [2024-07-25 23:39:05.489361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.082 [2024-07-25 23:39:05.489386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.082 qpair failed and we were unable to recover it. 00:33:08.082 [2024-07-25 23:39:05.489497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.082 [2024-07-25 23:39:05.489523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.082 qpair failed and we were unable to recover it. 
00:33:08.082 [2024-07-25 23:39:05.489631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.082 [2024-07-25 23:39:05.489658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.082 qpair failed and we were unable to recover it.
00:33:08.082 [2024-07-25 23:39:05.489761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.082 [2024-07-25 23:39:05.489788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.082 qpair failed and we were unable to recover it.
00:33:08.082 [2024-07-25 23:39:05.489887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.082 [2024-07-25 23:39:05.489913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.082 qpair failed and we were unable to recover it.
00:33:08.082 [2024-07-25 23:39:05.490035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.082 [2024-07-25 23:39:05.490084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:08.082 qpair failed and we were unable to recover it.
00:33:08.082 [2024-07-25 23:39:05.490195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.082 [2024-07-25 23:39:05.490224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:08.082 qpair failed and we were unable to recover it.
00:33:08.082 [2024-07-25 23:39:05.490322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.082 [2024-07-25 23:39:05.490348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:08.082 qpair failed and we were unable to recover it.
00:33:08.082 [2024-07-25 23:39:05.490447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.082 [2024-07-25 23:39:05.490473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:08.082 qpair failed and we were unable to recover it.
00:33:08.082 [2024-07-25 23:39:05.490570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.082 [2024-07-25 23:39:05.490596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:08.082 qpair failed and we were unable to recover it.
00:33:08.082 [2024-07-25 23:39:05.490636] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:08.082 [2024-07-25 23:39:05.490667] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:08.082 [2024-07-25 23:39:05.490683] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:08.082 [2024-07-25 23:39:05.490700] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:08.082 [2024-07-25 23:39:05.490712] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
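The app_setup_trace NOTICE lines above give two ways to get at the tracepoint data: run the 'spdk_trace -s nvmf -i 0' tool while the application is still alive, or copy /dev/shm/nvmf_trace.0 aside for offline analysis. A minimal sketch of the second option; the source path is quoted from the NOTICE, while the destination filename is an assumption for illustration:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* /dev/shm/nvmf_trace.0 is the trace file named in the NOTICE above;
     * nvmf_trace.0.saved is a hypothetical destination. */
    int in = open("/dev/shm/nvmf_trace.0", O_RDONLY);
    int out = open("nvmf_trace.0.saved", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0) {
        perror("open");
        return 1;
    }

    char buf[65536];
    ssize_t n;
    while ((n = read(in, buf, sizeof(buf))) > 0) {
        if (write(out, buf, (size_t)n) != n) {
            perror("write");
            return 1;
        }
    }
    close(in);
    close(out);
    return 0;
}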
00:33:08.082 [2024-07-25 23:39:05.490724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.082 [2024-07-25 23:39:05.490749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:08.082 qpair failed and we were unable to recover it.
00:33:08.082 [2024-07-25 23:39:05.490791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:33:08.082 [2024-07-25 23:39:05.490853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.082 [2024-07-25 23:39:05.490879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.083 qpair failed and we were unable to recover it.
00:33:08.083 [2024-07-25 23:39:05.490842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:33:08.083 [2024-07-25 23:39:05.490869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:33:08.083 [2024-07-25 23:39:05.490871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:33:08.083 [2024-07-25 23:39:05.490993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.083 [2024-07-25 23:39:05.491021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:08.083 qpair failed and we were unable to recover it.
00:33:08.083 [2024-07-25 23:39:05.491199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.083 [2024-07-25 23:39:05.491227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.083 qpair failed and we were unable to recover it.
00:33:08.083 [2024-07-25 23:39:05.491363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.083 [2024-07-25 23:39:05.491394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.083 qpair failed and we were unable to recover it.
00:33:08.083 [2024-07-25 23:39:05.491526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.083 [2024-07-25 23:39:05.491553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.083 qpair failed and we were unable to recover it.
00:33:08.083 [2024-07-25 23:39:05.491657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.083 [2024-07-25 23:39:05.491684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.083 qpair failed and we were unable to recover it.
00:33:08.083 [2024-07-25 23:39:05.491806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.083 [2024-07-25 23:39:05.491833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.083 qpair failed and we were unable to recover it.
00:33:08.083 [2024-07-25 23:39:05.491964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.083 [2024-07-25 23:39:05.492003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.083 qpair failed and we were unable to recover it.
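The "Reactor started on core N" notices landing in the middle of the connect() failures suggest the target application is still bringing up its reactors while the initiator is already dialing, so the refusals may simply mean the listener on port 4420 is not up yet. A minimal retry-with-backoff sketch under that assumption; this is an illustration, not SPDK's recovery path, which the log shows giving up with "qpair failed and we were unable to recover it.":

#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Illustrative only: retry a refused connect() a bounded number of times,
 * on the theory that the listener may simply not have started yet. */
static int connect_with_retry(const char *ip, unsigned short port, int attempts)
{
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    for (int i = 0; i < attempts; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            return -1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            return fd;                       /* listener is up; hand back the fd */
        }
        close(fd);
        if (errno != ECONNREFUSED) {
            return -1;                       /* only retry the errno = 111 case */
        }
        usleep(100 * 1000);                  /* 100 ms backoff between attempts */
    }
    return -1;
}

int main(void)
{
    /* Address and port taken from the log above; 20 attempts is arbitrary. */
    int fd = connect_with_retry("10.0.0.2", 4420, 20);
    if (fd >= 0) {
        close(fd);
    }
    return fd >= 0 ? 0 : 1;
}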
00:33:08.083 [2024-07-25 23:39:05.492131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.083 [2024-07-25 23:39:05.492170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.083 qpair failed and we were unable to recover it. 00:33:08.083 [2024-07-25 23:39:05.492285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.083 [2024-07-25 23:39:05.492313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.083 qpair failed and we were unable to recover it. 00:33:08.083 [2024-07-25 23:39:05.492419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.083 [2024-07-25 23:39:05.492450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.083 qpair failed and we were unable to recover it. 00:33:08.083 [2024-07-25 23:39:05.492588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.083 [2024-07-25 23:39:05.492614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.083 qpair failed and we were unable to recover it. 00:33:08.083 [2024-07-25 23:39:05.492712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.083 [2024-07-25 23:39:05.492738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.083 qpair failed and we were unable to recover it. 00:33:08.083 [2024-07-25 23:39:05.492878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.083 [2024-07-25 23:39:05.492905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.083 qpair failed and we were unable to recover it. 00:33:08.083 [2024-07-25 23:39:05.493018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.083 [2024-07-25 23:39:05.493052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.083 qpair failed and we were unable to recover it. 00:33:08.083 [2024-07-25 23:39:05.493184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.083 [2024-07-25 23:39:05.493223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.083 qpair failed and we were unable to recover it. 00:33:08.083 [2024-07-25 23:39:05.493342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.083 [2024-07-25 23:39:05.493372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.083 qpair failed and we were unable to recover it. 00:33:08.083 [2024-07-25 23:39:05.493480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.083 [2024-07-25 23:39:05.493507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.083 qpair failed and we were unable to recover it. 
00:33:08.083 [2024-07-25 23:39:05.493643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.083 [2024-07-25 23:39:05.493670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.083 qpair failed and we were unable to recover it. 00:33:08.083 [2024-07-25 23:39:05.493777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.083 [2024-07-25 23:39:05.493805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.083 qpair failed and we were unable to recover it. 00:33:08.083 [2024-07-25 23:39:05.493908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.083 [2024-07-25 23:39:05.493935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.083 qpair failed and we were unable to recover it. 00:33:08.083 [2024-07-25 23:39:05.494051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.083 [2024-07-25 23:39:05.494100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.083 qpair failed and we were unable to recover it. 00:33:08.083 [2024-07-25 23:39:05.494215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.083 [2024-07-25 23:39:05.494242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.083 qpair failed and we were unable to recover it. 00:33:08.083 [2024-07-25 23:39:05.494348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.083 [2024-07-25 23:39:05.494373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.083 qpair failed and we were unable to recover it. 00:33:08.083 [2024-07-25 23:39:05.494512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.083 [2024-07-25 23:39:05.494537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.083 qpair failed and we were unable to recover it. 00:33:08.083 [2024-07-25 23:39:05.494657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.083 [2024-07-25 23:39:05.494684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.083 qpair failed and we were unable to recover it. 00:33:08.083 [2024-07-25 23:39:05.494791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.083 [2024-07-25 23:39:05.494819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.083 qpair failed and we were unable to recover it. 00:33:08.083 [2024-07-25 23:39:05.494949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.083 [2024-07-25 23:39:05.494977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.083 qpair failed and we were unable to recover it. 
00:33:08.083 [2024-07-25 23:39:05.495091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.083 [2024-07-25 23:39:05.495120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.083 qpair failed and we were unable to recover it. 00:33:08.083 [2024-07-25 23:39:05.495224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.083 [2024-07-25 23:39:05.495250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.083 qpair failed and we were unable to recover it. 00:33:08.083 [2024-07-25 23:39:05.495413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.083 [2024-07-25 23:39:05.495439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.083 qpair failed and we were unable to recover it. 00:33:08.083 [2024-07-25 23:39:05.495535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.495562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.495698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.495724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.495856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.495883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.496011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.496038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.496146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.496172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.496280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.496307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.496423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.496450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 
00:33:08.084 [2024-07-25 23:39:05.496567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.496593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.496720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.496746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.496849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.496875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.496969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.496996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.497131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.497157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.497267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.497294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.497441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.497468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.497572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.497598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.497696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.497722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.497823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.497849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 
00:33:08.084 [2024-07-25 23:39:05.497997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.498036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.498183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.498211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.498321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.498352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.498463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.498490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.498624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.498650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.498753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.498780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.498918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.498944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.499046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.499080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.499189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.499215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.499341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.499368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 
00:33:08.084 [2024-07-25 23:39:05.499467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.499493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.499599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.499625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.499728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.499756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.499920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.499949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.084 [2024-07-25 23:39:05.500055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.084 [2024-07-25 23:39:05.500088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.084 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.500194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.500220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.500335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.500361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.500496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.500522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.500652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.500678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.500778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.500804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 
00:33:08.085 [2024-07-25 23:39:05.500943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.500976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.501096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.501123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.501233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.501259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.501376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.501402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.501539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.501565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.501728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.501754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.501866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.501892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.502033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.502072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.502182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.502208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.502361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.502401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 
00:33:08.085 [2024-07-25 23:39:05.502513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.502540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.502654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.502680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.502786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.502812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.502916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.502942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.503054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.503090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.503226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.503253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.503386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.503412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.503544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.503571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.503674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.503700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.503849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.503888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 
00:33:08.085 [2024-07-25 23:39:05.504032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.504068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.504184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.504212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.504346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.504377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.504479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.504505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.504608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.504635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.504767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.504793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.085 qpair failed and we were unable to recover it. 00:33:08.085 [2024-07-25 23:39:05.504935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.085 [2024-07-25 23:39:05.504973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.505114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.505153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.505263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.505290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.505389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.505416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 
00:33:08.086 [2024-07-25 23:39:05.505515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.505541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.505678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.505706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.505813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.505839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.505950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.505977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.506085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.506112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.506219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.506244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.506352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.506378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.506482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.506508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.506642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.506668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.506797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.506835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 
00:33:08.086 [2024-07-25 23:39:05.506937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.506964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.507074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.507101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.507201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.507227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.507340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.507369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.507469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.507496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.507629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.507655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.507755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.507781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.507924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.507963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.508081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.508110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.508223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.508254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 
00:33:08.086 [2024-07-25 23:39:05.508418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.508444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.508546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.508573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.508676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.508703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.508815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.508841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.508995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.509025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.509157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.509184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.509295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.509322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.509440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.086 [2024-07-25 23:39:05.509466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.086 qpair failed and we were unable to recover it. 00:33:08.086 [2024-07-25 23:39:05.509609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-07-25 23:39:05.509636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.087 qpair failed and we were unable to recover it. 00:33:08.087 [2024-07-25 23:39:05.509742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-07-25 23:39:05.509769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.087 qpair failed and we were unable to recover it. 
00:33:08.087 [2024-07-25 23:39:05.509886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-07-25 23:39:05.509914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.087 qpair failed and we were unable to recover it. 00:33:08.087 [2024-07-25 23:39:05.510018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-07-25 23:39:05.510044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.087 qpair failed and we were unable to recover it. 00:33:08.087 [2024-07-25 23:39:05.510164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-07-25 23:39:05.510191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.087 qpair failed and we were unable to recover it. 00:33:08.087 [2024-07-25 23:39:05.510329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-07-25 23:39:05.510355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.087 qpair failed and we were unable to recover it. 00:33:08.087 [2024-07-25 23:39:05.510452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-07-25 23:39:05.510478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.087 qpair failed and we were unable to recover it. 00:33:08.087 [2024-07-25 23:39:05.510591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-07-25 23:39:05.510617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.087 qpair failed and we were unable to recover it. 00:33:08.087 [2024-07-25 23:39:05.510725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-07-25 23:39:05.510752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.087 qpair failed and we were unable to recover it. 00:33:08.087 [2024-07-25 23:39:05.510860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-07-25 23:39:05.510886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.087 qpair failed and we were unable to recover it. 00:33:08.087 [2024-07-25 23:39:05.511049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-07-25 23:39:05.511084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.087 qpair failed and we were unable to recover it. 00:33:08.087 [2024-07-25 23:39:05.511188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-07-25 23:39:05.511214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.087 qpair failed and we were unable to recover it. 
00:33:08.087 [2024-07-25 23:39:05.511317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-07-25 23:39:05.511343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.087 qpair failed and we were unable to recover it. 00:33:08.087 [2024-07-25 23:39:05.511472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-07-25 23:39:05.511497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.087 qpair failed and we were unable to recover it. 00:33:08.087 [2024-07-25 23:39:05.511603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-07-25 23:39:05.511629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.087 qpair failed and we were unable to recover it. 00:33:08.087 [2024-07-25 23:39:05.511768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-07-25 23:39:05.511794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.087 qpair failed and we were unable to recover it. 00:33:08.087 [2024-07-25 23:39:05.511951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-07-25 23:39:05.511989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.087 qpair failed and we were unable to recover it. 00:33:08.087 [2024-07-25 23:39:05.512107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-07-25 23:39:05.512135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.087 qpair failed and we were unable to recover it. 00:33:08.087 [2024-07-25 23:39:05.512248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-07-25 23:39:05.512275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.087 qpair failed and we were unable to recover it. 00:33:08.087 [2024-07-25 23:39:05.512411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-07-25 23:39:05.512436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.087 qpair failed and we were unable to recover it. 00:33:08.087 [2024-07-25 23:39:05.512542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-07-25 23:39:05.512568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.087 qpair failed and we were unable to recover it. 00:33:08.087 [2024-07-25 23:39:05.512674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-07-25 23:39:05.512700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.087 qpair failed and we were unable to recover it. 
00:33:08.087 [2024-07-25 23:39:05.512804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-07-25 23:39:05.512829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.087 qpair failed and we were unable to recover it. 00:33:08.087 [2024-07-25 23:39:05.512975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-07-25 23:39:05.513014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.087 qpair failed and we were unable to recover it. 00:33:08.087 [2024-07-25 23:39:05.513137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-07-25 23:39:05.513176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.087 qpair failed and we were unable to recover it. 00:33:08.087 [2024-07-25 23:39:05.513289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-07-25 23:39:05.513316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.513462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.513489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.513621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.513647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.513759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.513786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.513885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.513910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.514026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.514071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.514217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.514249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 
00:33:08.088 [2024-07-25 23:39:05.514362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.514390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.514497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.514523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.514634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.514659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.514765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.514790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.514888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.514912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.515018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.515043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.515204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.515230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.515339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.515366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.515474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.515500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.515606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.515632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 
00:33:08.088 [2024-07-25 23:39:05.515775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.515800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.515917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.515942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.516054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.516088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.516205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.516231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.516338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.516365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.516499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.516526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.516663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.516688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.516828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.516855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.516961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.516987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.517125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.517152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 
00:33:08.088 [2024-07-25 23:39:05.517264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.517290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.517421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.517447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.517589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.517616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.517718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.517744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.088 [2024-07-25 23:39:05.517855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-07-25 23:39:05.517882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.088 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.517991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.518016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.518130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.518169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.518277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.518304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.518432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.518458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.518561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.518587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 
00:33:08.089 [2024-07-25 23:39:05.518681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.518706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.518841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.518867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.518967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.518992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.519134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.519162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.519279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.519317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.519456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.519483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.519624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.519650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.519763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.519789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.519890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.519916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.520019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.520045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 
00:33:08.089 [2024-07-25 23:39:05.520180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.520207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.520316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.520343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.520485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.520512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.520661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.520687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.520831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.520870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.521014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.521042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.521188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.521216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.521322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.521348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.521455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.521481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.521581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.521606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 
00:33:08.089 [2024-07-25 23:39:05.521720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.521749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.521856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.521882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.521989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.522014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.522134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.522162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.522306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.522344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.522463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.522491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.089 [2024-07-25 23:39:05.522628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.089 [2024-07-25 23:39:05.522655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.089 qpair failed and we were unable to recover it. 00:33:08.090 [2024-07-25 23:39:05.522761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.090 [2024-07-25 23:39:05.522787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.090 qpair failed and we were unable to recover it. 00:33:08.090 [2024-07-25 23:39:05.522930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.090 [2024-07-25 23:39:05.522969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.090 qpair failed and we were unable to recover it. 00:33:08.090 [2024-07-25 23:39:05.523086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.090 [2024-07-25 23:39:05.523113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.090 qpair failed and we were unable to recover it. 
00:33:08.090 [2024-07-25 23:39:05.523251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.090 [2024-07-25 23:39:05.523277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.090 qpair failed and we were unable to recover it.
00:33:08.090 [2024-07-25 23:39:05.523421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.090 [2024-07-25 23:39:05.523446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.090 qpair failed and we were unable to recover it.
00:33:08.090 [2024-07-25 23:39:05.524126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.090 [2024-07-25 23:39:05.524165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:08.090 qpair failed and we were unable to recover it.
00:33:08.090 [2024-07-25 23:39:05.526014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.090 [2024-07-25 23:39:05.526053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420
00:33:08.090 qpair failed and we were unable to recover it.
00:33:08.091 [2024-07-25 23:39:05.528262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.091 [2024-07-25 23:39:05.528301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420
00:33:08.091 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats continuously, alternating among tqpair=0xfa44b0, 0x7fdc14000b90, 0x7fdc1c000b90, and 0x7fdc0c000b90, from 2024-07-25 23:39:05.523 through 23:39:05.553 ...]
00:33:08.096 [2024-07-25 23:39:05.553651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.096 [2024-07-25 23:39:05.553676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.096 qpair failed and we were unable to recover it.
00:33:08.096 [2024-07-25 23:39:05.553779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.096 [2024-07-25 23:39:05.553806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.096 qpair failed and we were unable to recover it. 00:33:08.096 [2024-07-25 23:39:05.553902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.096 [2024-07-25 23:39:05.553928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.096 qpair failed and we were unable to recover it. 00:33:08.096 [2024-07-25 23:39:05.554083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.096 [2024-07-25 23:39:05.554110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.096 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.554225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.554252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.554351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.554377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.554481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.554508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.554608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.554634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.554739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.554764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.554919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.554958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.555074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.555100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 
00:33:08.097 [2024-07-25 23:39:05.555200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.555226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.555354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.555380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.555489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.555515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.555615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.555641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.555740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.555766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.555917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.555956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.556097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.556124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.556234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.556261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.556371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.556397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.556525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.556550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 
00:33:08.097 [2024-07-25 23:39:05.556663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.556689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.556794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.556821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.556963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.556989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.557128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.557167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.557289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.557316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.557421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.557447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.557560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.557585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.557697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.557724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.557824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.557850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.557978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.558004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 
00:33:08.097 [2024-07-25 23:39:05.558111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.558137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.558244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.558270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.558379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.558405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.558529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.097 [2024-07-25 23:39:05.558555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.097 qpair failed and we were unable to recover it. 00:33:08.097 [2024-07-25 23:39:05.558651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.098 [2024-07-25 23:39:05.558678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.098 qpair failed and we were unable to recover it. 00:33:08.098 [2024-07-25 23:39:05.558790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.098 [2024-07-25 23:39:05.558816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.098 qpair failed and we were unable to recover it. 00:33:08.098 [2024-07-25 23:39:05.558918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.098 [2024-07-25 23:39:05.558945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.098 qpair failed and we were unable to recover it. 00:33:08.098 [2024-07-25 23:39:05.559081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.098 [2024-07-25 23:39:05.559107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.098 qpair failed and we were unable to recover it. 00:33:08.098 [2024-07-25 23:39:05.559206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.098 [2024-07-25 23:39:05.559236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.098 qpair failed and we were unable to recover it. 00:33:08.098 [2024-07-25 23:39:05.559349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.098 [2024-07-25 23:39:05.559374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.098 qpair failed and we were unable to recover it. 
00:33:08.098 [2024-07-25 23:39:05.559523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.098 [2024-07-25 23:39:05.559548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.098 qpair failed and we were unable to recover it. 00:33:08.098 [2024-07-25 23:39:05.559651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.098 [2024-07-25 23:39:05.559679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.098 qpair failed and we were unable to recover it. 00:33:08.098 [2024-07-25 23:39:05.559815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.098 [2024-07-25 23:39:05.559841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.098 qpair failed and we were unable to recover it. 00:33:08.098 [2024-07-25 23:39:05.559999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.098 [2024-07-25 23:39:05.560038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.098 qpair failed and we were unable to recover it. 00:33:08.098 [2024-07-25 23:39:05.560156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.098 [2024-07-25 23:39:05.560184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.098 qpair failed and we were unable to recover it. 00:33:08.098 [2024-07-25 23:39:05.560284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.098 [2024-07-25 23:39:05.560309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.098 qpair failed and we were unable to recover it. 00:33:08.098 [2024-07-25 23:39:05.560418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.098 [2024-07-25 23:39:05.560445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.098 qpair failed and we were unable to recover it. 00:33:08.098 [2024-07-25 23:39:05.560602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.098 [2024-07-25 23:39:05.560628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.098 qpair failed and we were unable to recover it. 00:33:08.098 [2024-07-25 23:39:05.560762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.098 [2024-07-25 23:39:05.560788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.098 qpair failed and we were unable to recover it. 00:33:08.098 [2024-07-25 23:39:05.560894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.098 [2024-07-25 23:39:05.560920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.098 qpair failed and we were unable to recover it. 
00:33:08.098 [2024-07-25 23:39:05.561023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.098 [2024-07-25 23:39:05.561049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.098 qpair failed and we were unable to recover it. 00:33:08.098 [2024-07-25 23:39:05.561158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.098 [2024-07-25 23:39:05.561183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.098 qpair failed and we were unable to recover it. 00:33:08.098 [2024-07-25 23:39:05.561295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.098 [2024-07-25 23:39:05.561320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.098 qpair failed and we were unable to recover it. 00:33:08.098 [2024-07-25 23:39:05.561429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.098 [2024-07-25 23:39:05.561455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.098 qpair failed and we were unable to recover it. 00:33:08.098 [2024-07-25 23:39:05.561578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.098 [2024-07-25 23:39:05.561603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.098 qpair failed and we were unable to recover it. 00:33:08.098 [2024-07-25 23:39:05.561704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.098 [2024-07-25 23:39:05.561729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.098 qpair failed and we were unable to recover it. 00:33:08.098 [2024-07-25 23:39:05.561850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.098 [2024-07-25 23:39:05.561888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.098 qpair failed and we were unable to recover it. 00:33:08.098 [2024-07-25 23:39:05.562002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.098 [2024-07-25 23:39:05.562030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.098 qpair failed and we were unable to recover it. 00:33:08.098 [2024-07-25 23:39:05.562150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.098 [2024-07-25 23:39:05.562176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.098 qpair failed and we were unable to recover it. 00:33:08.098 [2024-07-25 23:39:05.562281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.098 [2024-07-25 23:39:05.562306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.098 qpair failed and we were unable to recover it. 
00:33:08.098 [2024-07-25 23:39:05.562438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.562464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.562599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.562625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.562730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.562756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.562891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.562919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.563070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.563097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.563202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.563233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.563339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.563365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.563496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.563521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.563634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.563660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.563771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.563797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 
00:33:08.099 [2024-07-25 23:39:05.563901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.563929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.564035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.564069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.564175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.564200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.564336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.564362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.564520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.564545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.564648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.564675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.564780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.564805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.564906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.564932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.565037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.565070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.565186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.565212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 
00:33:08.099 [2024-07-25 23:39:05.565316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.565341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.565445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.565470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.565600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.565625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.565724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.565748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.565847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.565872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.565978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.566003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.566143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.566170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.566265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.566290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.566400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.566425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.566541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.566566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 
00:33:08.099 [2024-07-25 23:39:05.566699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.566725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.099 [2024-07-25 23:39:05.566830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.099 [2024-07-25 23:39:05.566855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.099 qpair failed and we were unable to recover it. 00:33:08.100 [2024-07-25 23:39:05.567015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.100 [2024-07-25 23:39:05.567054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.100 qpair failed and we were unable to recover it. 00:33:08.100 [2024-07-25 23:39:05.567193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.100 [2024-07-25 23:39:05.567222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.100 qpair failed and we were unable to recover it. 00:33:08.100 [2024-07-25 23:39:05.567340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.100 [2024-07-25 23:39:05.567367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.100 qpair failed and we were unable to recover it. 00:33:08.100 [2024-07-25 23:39:05.567481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.100 [2024-07-25 23:39:05.567508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.100 qpair failed and we were unable to recover it. 00:33:08.100 [2024-07-25 23:39:05.567611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.100 [2024-07-25 23:39:05.567637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.100 qpair failed and we were unable to recover it. 00:33:08.100 [2024-07-25 23:39:05.567751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.100 [2024-07-25 23:39:05.567776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.100 qpair failed and we were unable to recover it. 00:33:08.100 [2024-07-25 23:39:05.567911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.100 [2024-07-25 23:39:05.567937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.100 qpair failed and we were unable to recover it. 00:33:08.100 [2024-07-25 23:39:05.568052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.100 [2024-07-25 23:39:05.568085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.100 qpair failed and we were unable to recover it. 
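Every retry above fails the same way: errno 111 on Linux is ECONNREFUSED, meaning the host 10.0.0.2 is reachable but nothing is accepting TCP connections on port 4420 (the NVMe/TCP default port) while the target is down. A minimal standalone sketch, using plain POSIX sockets (illustrative only, not SPDK's posix_sock_create()), reproduces exactly the failure each record reports:

```c
/*
 * Minimal standalone sketch (illustrative only, not SPDK code):
 * connect() to a TCP port with no listener fails with errno 111
 * (ECONNREFUSED) on Linux, which is what every retry in this log shows.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),   /* address and port taken from the log */
    };
    if (inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr) != 1) {
        fprintf(stderr, "bad address\n");
        return 1;
    }

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the target port this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```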
00:33:08.100 [2024-07-25 23:39:05.569368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.100 [2024-07-25 23:39:05.569395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.100 qpair failed and we were unable to recover it.
00:33:08.100 A controller has encountered a failure and is being reset.
00:33:08.100 [2024-07-25 23:39:05.569548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.100 [2024-07-25 23:39:05.569574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420
00:33:08.100 qpair failed and we were unable to recover it.
[... connect() retries with errno = 111 and further unrecoverable qpair failures continue through 23:39:05.577 across the same four tqpair values ...]
00:33:08.102 [2024-07-25 23:39:05.577924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-25 23:39:05.577949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-25 23:39:05.578054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-25 23:39:05.578086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-25 23:39:05.578203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-25 23:39:05.578247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-25 23:39:05.578390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-25 23:39:05.578416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-25 23:39:05.578542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-25 23:39:05.578567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-25 23:39:05.578695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-25 23:39:05.578725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-25 23:39:05.578824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-25 23:39:05.578849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-25 23:39:05.578978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-25 23:39:05.579003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-25 23:39:05.579110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-25 23:39:05.579136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-25 23:39:05.579242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-25 23:39:05.579267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 
00:33:08.102 [2024-07-25 23:39:05.579394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-25 23:39:05.579419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-25 23:39:05.579523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-25 23:39:05.579548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-25 23:39:05.579646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-25 23:39:05.579671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-25 23:39:05.579777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-25 23:39:05.579802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-25 23:39:05.579962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-25 23:39:05.579987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-25 23:39:05.580114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-25 23:39:05.580140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.102 [2024-07-25 23:39:05.580245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.102 [2024-07-25 23:39:05.580271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.102 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-25 23:39:05.580368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.580393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-25 23:39:05.580499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.580524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-25 23:39:05.580630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.580655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 
00:33:08.103 [2024-07-25 23:39:05.580782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.580807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-25 23:39:05.580911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.580939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-25 23:39:05.581068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.581108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-25 23:39:05.581223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.581250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-25 23:39:05.581393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.581419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-25 23:39:05.581525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.581551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-25 23:39:05.581676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.581701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-25 23:39:05.581831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.581857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-25 23:39:05.581958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.581983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-25 23:39:05.582101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.582132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 
00:33:08.103 [2024-07-25 23:39:05.582246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.582272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-25 23:39:05.582368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.582393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-25 23:39:05.582503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.582528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-25 23:39:05.582626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.582651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-25 23:39:05.582775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.582800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-25 23:39:05.582906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.582932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-25 23:39:05.583055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.583101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-25 23:39:05.583203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.583229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-25 23:39:05.583333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.583357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-25 23:39:05.583482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.583507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 
00:33:08.103 [2024-07-25 23:39:05.583613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.583640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-25 23:39:05.583745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.583770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-25 23:39:05.583875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.583901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.103 qpair failed and we were unable to recover it. 00:33:08.103 [2024-07-25 23:39:05.584010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.103 [2024-07-25 23:39:05.584036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.584155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.584181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.584292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.584317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.584424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.584449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.584551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.584575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.584678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.584703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.584802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.584827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 
00:33:08.104 [2024-07-25 23:39:05.584950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.584975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.585092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.585118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.585225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.585253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.585356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.585382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.585488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.585514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.585617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.585643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.585745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.585774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.585904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.585930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.586033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.586063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.586171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.586196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 
00:33:08.104 [2024-07-25 23:39:05.586327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.586352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.586454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.586479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.586614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.586640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.586766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.586791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.586893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.586921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.587018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.587044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.587167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.587194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.587324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.587349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.587484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.587510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.587615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.587640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 
00:33:08.104 [2024-07-25 23:39:05.587761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.587786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.587889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.587916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.588016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.588042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.588157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.588187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.588290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.104 [2024-07-25 23:39:05.588316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.104 qpair failed and we were unable to recover it. 00:33:08.104 [2024-07-25 23:39:05.588415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.588440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.588544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.588569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.588667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.588692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.588797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.588822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.588931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.588956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 
00:33:08.105 [2024-07-25 23:39:05.589054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.589086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.589191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.589216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.589324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.589349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.589451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.589480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.589591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.589618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.589774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.589800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.589901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.589926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.590031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.590056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.590162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.590187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.590326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.590351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 
00:33:08.105 [2024-07-25 23:39:05.590461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.590487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.590587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.590612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.590706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.590731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.590844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.590869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.590968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.590993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.591089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.591116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.591225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.591250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.591394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.591419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.591521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.591546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.591647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.591672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 
00:33:08.105 [2024-07-25 23:39:05.591769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.591795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.591886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.591911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.592032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.592077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.592189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.592216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.592327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.592354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.592459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.592485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.105 [2024-07-25 23:39:05.592626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.105 [2024-07-25 23:39:05.592652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.105 qpair failed and we were unable to recover it. 00:33:08.106 [2024-07-25 23:39:05.592759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.106 [2024-07-25 23:39:05.592785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.106 qpair failed and we were unable to recover it. 00:33:08.106 [2024-07-25 23:39:05.592900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.106 [2024-07-25 23:39:05.592926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.106 qpair failed and we were unable to recover it. 00:33:08.106 [2024-07-25 23:39:05.593038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.106 [2024-07-25 23:39:05.593070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.106 qpair failed and we were unable to recover it. 
00:33:08.106 [2024-07-25 23:39:05.593184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.106 [2024-07-25 23:39:05.593215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.106 qpair failed and we were unable to recover it. 00:33:08.106 [2024-07-25 23:39:05.593325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.106 [2024-07-25 23:39:05.593352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.106 qpair failed and we were unable to recover it. 00:33:08.106 [2024-07-25 23:39:05.593455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.106 [2024-07-25 23:39:05.593480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.106 qpair failed and we were unable to recover it. 00:33:08.106 [2024-07-25 23:39:05.593616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.106 [2024-07-25 23:39:05.593642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.106 qpair failed and we were unable to recover it. 00:33:08.106 [2024-07-25 23:39:05.593749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.106 [2024-07-25 23:39:05.593774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.106 qpair failed and we were unable to recover it. 00:33:08.106 [2024-07-25 23:39:05.593883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.106 [2024-07-25 23:39:05.593909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.106 qpair failed and we were unable to recover it. 00:33:08.106 [2024-07-25 23:39:05.594033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.106 [2024-07-25 23:39:05.594077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.106 qpair failed and we were unable to recover it. 00:33:08.106 [2024-07-25 23:39:05.594197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.106 [2024-07-25 23:39:05.594224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.106 qpair failed and we were unable to recover it. 00:33:08.106 [2024-07-25 23:39:05.594369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.106 [2024-07-25 23:39:05.594395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.106 qpair failed and we were unable to recover it. 00:33:08.106 [2024-07-25 23:39:05.594505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.106 [2024-07-25 23:39:05.594531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.106 qpair failed and we were unable to recover it. 
00:33:08.106 [2024-07-25 23:39:05.594636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.106 [2024-07-25 23:39:05.594661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.106 qpair failed and we were unable to recover it. 00:33:08.106 [2024-07-25 23:39:05.594763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.106 [2024-07-25 23:39:05.594790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.106 qpair failed and we were unable to recover it. 00:33:08.106 [2024-07-25 23:39:05.594897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.106 [2024-07-25 23:39:05.594922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.106 qpair failed and we were unable to recover it. 00:33:08.106 [2024-07-25 23:39:05.595056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.106 [2024-07-25 23:39:05.595091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc0c000b90 with addr=10.0.0.2, port=4420 00:33:08.106 qpair failed and we were unable to recover it. 00:33:08.106 [2024-07-25 23:39:05.595229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.106 [2024-07-25 23:39:05.595256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.106 qpair failed and we were unable to recover it. 00:33:08.106 [2024-07-25 23:39:05.595367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.106 [2024-07-25 23:39:05.595393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.106 qpair failed and we were unable to recover it. 00:33:08.106 [2024-07-25 23:39:05.595522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.106 [2024-07-25 23:39:05.595548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.106 qpair failed and we were unable to recover it. 00:33:08.106 [2024-07-25 23:39:05.595659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.106 [2024-07-25 23:39:05.595685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.106 qpair failed and we were unable to recover it. 00:33:08.106 [2024-07-25 23:39:05.595791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.106 [2024-07-25 23:39:05.595816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.106 qpair failed and we were unable to recover it. 00:33:08.106 [2024-07-25 23:39:05.595956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.106 [2024-07-25 23:39:05.595982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.106 qpair failed and we were unable to recover it. 
00:33:08.106 [2024-07-25 23:39:05.596113 .. 23:39:05.618112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.106 nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 / 0x7fdc0c000b90 / 0x7fdc1c000b90 / 0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.106 qpair failed and we were unable to recover it.
00:33:08.106 [the three lines above repeat for every connect attempt between 23:39:05.596113 and 23:39:05.618112, differing only in timestamp and in which of the four tqpair addresses failed]
00:33:08.111 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:33:08.111 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:33:08.111 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:33:08.111 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:08.111 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:08.111 [connect() failed, errno = 111 / sock connection error messages for tqpair=0x7fdc1c000b90 (23:39:05.618215 .. 23:39:05.619222) remain interleaved with the shell trace above]
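The (( i == 0 )) / return 0 trace above is the tail of the harness's wait-for-target loop in common/autotest_common.sh: it counts i down while polling, returns 0 once the nvmf target is up, and treats i reaching 0 as a timeout. A minimal sketch of that pattern follows; the function name, poll test, and timeout are illustrative assumptions, not the actual autotest_common.sh source:

    # Illustrative readiness loop; NOT the real common/autotest_common.sh.
    waitfortarget() {
        local pid=$1 i
        for ((i = 30; i != 0; i--)); do
            # succeed as soon as the target process is alive;
            # mirrors the 'return 0' step in the trace above
            kill -0 "$pid" 2>/dev/null && return 0
            sleep 1
        done
        ((i == 0)) && return 1   # mirrors the '(( i == 0 ))' timeout check
    }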
00:33:08.111 [2024-07-25 23:39:05.619334 .. 23:39:05.624899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.111 nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 / 0x7fdc0c000b90 / 0xfa44b0 with addr=10.0.0.2, port=4420
00:33:08.112 qpair failed and we were unable to recover it.
00:33:08.112 [the three lines above repeat through 23:39:05.624899, differing only in timestamp and tqpair address]
00:33:08.112 [2024-07-25 23:39:05.625000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-25 23:39:05.625026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-25 23:39:05.625149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-25 23:39:05.625175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-25 23:39:05.625274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-25 23:39:05.625299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-25 23:39:05.625434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-25 23:39:05.625467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-25 23:39:05.625566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-25 23:39:05.625591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.112 [2024-07-25 23:39:05.625691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.112 [2024-07-25 23:39:05.625717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.112 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.625811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.625836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.625937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.625963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.626068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.626094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.626199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.626224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 
00:33:08.113 [2024-07-25 23:39:05.626325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.626351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.626445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.626470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.626577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.626602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.626769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.626794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.626906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.626931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.627033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.627076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.627174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.627199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.627305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.627330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.627440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.627466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.627567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.627594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 
00:33:08.113 [2024-07-25 23:39:05.627692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.627718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.627839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.627864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.627959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.627984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.628123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.628150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.628253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.628279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.628418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.628444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.628540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.628566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.628671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.628697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.628794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.628819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.628967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.628992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 
00:33:08.113 [2024-07-25 23:39:05.629100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.629138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.629242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.629268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.629394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.629431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.629534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.629559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.629667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.629693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.629810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.629836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.629972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.629997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.630116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.630142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.630250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.630275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.630409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.630435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 
00:33:08.113 [2024-07-25 23:39:05.630540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.630567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.630669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.630694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.113 [2024-07-25 23:39:05.630811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.113 [2024-07-25 23:39:05.630837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.113 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.630939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.630966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.631070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.631095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.631228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.631253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.631352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.631377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.631476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.631501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.631629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.631655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.631788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.631814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 
00:33:08.114 [2024-07-25 23:39:05.631920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.631946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.632051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.632095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.632203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.632228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.632360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.632386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.632496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.632522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.632625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.632650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.632779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.632805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.632902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.632940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.633046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.633086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.633186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.633212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 
00:33:08.114 [2024-07-25 23:39:05.633313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.633338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.633442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.633468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.633560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.633586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.633682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.633707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.633817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.633843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.633950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.633979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.634112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.634139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.634255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.634285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.634384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.634411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.634518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.634543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 
00:33:08.114 [2024-07-25 23:39:05.634645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.634671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.634771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.634798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.634916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.634956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.635104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.635132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.635238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.635264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.635371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.114 [2024-07-25 23:39:05.635401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.114 qpair failed and we were unable to recover it. 00:33:08.114 [2024-07-25 23:39:05.635506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.635533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.635658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.635683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.635797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.635824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b9 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:08.115 0 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 
00:33:08.115 [2024-07-25 23:39:05.635935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.635962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.115 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.636084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.636111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.115 [2024-07-25 23:39:05.636217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.636244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.115 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.636343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.636369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.636514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.636540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.636644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.636679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.636813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.636839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.636952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.636977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.637085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.637111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 
00:33:08.115 [2024-07-25 23:39:05.637218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.637243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.637353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.637380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.637483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.637508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.637639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.637665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.637779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.637812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.637941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.637967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.638080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.638106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.638210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.638235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.638330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.638356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.638457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.638482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 
00:33:08.115 [2024-07-25 23:39:05.638583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.638609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.638710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.638735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.638910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.638936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.639077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.639103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.639210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.639235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.639341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.639368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.639487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.639513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.639620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.639646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.639748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.639774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.639899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.639925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 
00:33:08.115 [2024-07-25 23:39:05.640022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.640047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.640160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.640185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.640298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.640325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.640457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.640483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.640589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.640615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.640718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.115 [2024-07-25 23:39:05.640744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.115 qpair failed and we were unable to recover it. 00:33:08.115 [2024-07-25 23:39:05.640870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.116 [2024-07-25 23:39:05.640896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.116 qpair failed and we were unable to recover it. 00:33:08.116 [2024-07-25 23:39:05.640998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.116 [2024-07-25 23:39:05.641024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.116 qpair failed and we were unable to recover it. 00:33:08.116 [2024-07-25 23:39:05.641133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.116 [2024-07-25 23:39:05.641159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.116 qpair failed and we were unable to recover it. 00:33:08.116 [2024-07-25 23:39:05.641288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.116 [2024-07-25 23:39:05.641314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.116 qpair failed and we were unable to recover it. 
00:33:08.116 [2024-07-25 23:39:05.641423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.116 [2024-07-25 23:39:05.641449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc1c000b90 with addr=10.0.0.2, port=4420 00:33:08.116 qpair failed and we were unable to recover it. 00:33:08.116 [2024-07-25 23:39:05.641568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.116 [2024-07-25 23:39:05.641607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.116 qpair failed and we were unable to recover it. 00:33:08.116 [2024-07-25 23:39:05.641780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.116 [2024-07-25 23:39:05.641808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.116 qpair failed and we were unable to recover it. 00:33:08.116 [2024-07-25 23:39:05.641914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.116 [2024-07-25 23:39:05.641941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.116 qpair failed and we were unable to recover it. 00:33:08.116 [2024-07-25 23:39:05.642078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.116 [2024-07-25 23:39:05.642106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.116 qpair failed and we were unable to recover it. 00:33:08.116 [2024-07-25 23:39:05.642217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.116 [2024-07-25 23:39:05.642244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.116 qpair failed and we were unable to recover it. 00:33:08.116 [2024-07-25 23:39:05.642378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.116 [2024-07-25 23:39:05.642404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.116 qpair failed and we were unable to recover it. 00:33:08.116 [2024-07-25 23:39:05.642534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.116 [2024-07-25 23:39:05.642560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.116 qpair failed and we were unable to recover it. 00:33:08.116 [2024-07-25 23:39:05.642667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.116 [2024-07-25 23:39:05.642693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.116 qpair failed and we were unable to recover it. 00:33:08.116 [2024-07-25 23:39:05.642802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.116 [2024-07-25 23:39:05.642828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdc14000b90 with addr=10.0.0.2, port=4420 00:33:08.116 qpair failed and we were unable to recover it. 
00:33:08.116 [2024-07-25 23:39:05.643001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.116 [2024-07-25 23:39:05.643041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa44b0 with addr=10.0.0.2, port=4420 00:33:08.116 qpair failed and we were unable to recover it.
[... the connect()/qpair-failure pair above repeats roughly another hundred times between 23:39:05.643 and 23:39:05.657, identical except for the timestamps and the tqpair value, which is mostly 0xfa44b0 with stretches of 0x7fdc14000b90 and 0x7fdc1c000b90 ...]
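The errno 111 on every connect() above is ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.2:4420 at this point, so each qpair attempt fails immediately and recovery gives up. A minimal, hypothetical bash probe (not part of the test scripts) that waits for the listener instead of hammering it might look like:

# Uses bash's /dev/tcp redirection; the address and port are taken from
# the log above. errno 111 (ECONNREFUSED) means the host answered but no
# process is listening on the port yet.
for i in $(seq 1 10); do
    if timeout 1 bash -c 'cat < /dev/null > /dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "listener is up on 10.0.0.2:4420"
        break
    fi
    echo "connect refused, retry $i"
    sleep 1
done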
00:33:08.119 [... six more connect()/qpair-failure pairs against tqpair=0xfa44b0, 23:39:05.657377 through 23:39:05.658089 ...]
00:33:08.119 Malloc0
00:33:08.119 [... three more connect()/qpair-failure pairs against tqpair=0xfa44b0, 23:39:05.658192 through 23:39:05.658496 ...]
00:33:08.119 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:08.119 [... one more connect()/qpair-failure pair against tqpair=0xfa44b0 at 23:39:05.658611 ...]
00:33:08.119 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:33:08.119 [... one more connect()/qpair-failure pair against tqpair=0xfa44b0 at 23:39:05.658764 ...]
00:33:08.119 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:08.119 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:08.119 [... seven more connect()/qpair-failure pairs, now against tqpair=0x7fdc14000b90, 23:39:05.658904 through 23:39:05.659812 ...]
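The xtrace lines interleaved above and just below show target_disconnect.sh bringing the target side up over RPC while the initiator is still retrying: nvmf_create_transport first, then subsystem, namespace, and listener creation. A condensed, hand-written sketch of that sequence, assuming SPDK's stock scripts/rpc.py and its default RPC socket (the flags, NQN, serial number, and listener address are copied from the trace; the Malloc0 size is an assumption, since the log only shows the bdev name):

# rpc_cmd in the test harness is a thin wrapper around scripts/rpc.py.
RPC="./scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o            # -o disables the C2H success optimization
$RPC bdev_malloc_create 64 512 -b Malloc0       # 64 MiB, 512 B blocks: assumed size
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Once the listener RPCs land, the log below prints the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice and the stalled initiator can finally connect.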
00:33:08.119 [2024-07-25 23:39:05.659991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.119 [2024-07-25 23:39:05.660038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb2470 with addr=10.0.0.2, port=4420 00:33:08.119 [2024-07-25 23:39:05.660084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb2470 is same with the state(5) to be set 00:33:08.119 [2024-07-25 23:39:05.660114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb2470 (9): Bad file descriptor 00:33:08.119 [2024-07-25 23:39:05.660135] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.119 [2024-07-25 23:39:05.660150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.119 [2024-07-25 23:39:05.660167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.119 Unable to reset the controller. 00:33:08.119 [2024-07-25 23:39:05.661911] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:08.119 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.119 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:08.119 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.119 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:08.119 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.119 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:08.119 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.119 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:08.119 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.119 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:08.119 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.119 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:08.119 [2024-07-25 23:39:05.690115] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:08.119 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.119 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp 
-a 10.0.0.2 -s 4420 00:33:08.119 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.119 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:08.119 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.119 23:39:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1536703 00:33:09.051 Controller properly reset. 00:33:14.307 Initializing NVMe Controllers 00:33:14.307 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:14.307 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:14.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:14.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:14.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:14.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:14.307 Initialization complete. Launching workers. 00:33:14.307 Starting thread on core 1 00:33:14.307 Starting thread on core 2 00:33:14.307 Starting thread on core 3 00:33:14.307 Starting thread on core 0 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:33:14.307 00:33:14.307 real 0m10.765s 00:33:14.307 user 0m32.879s 00:33:14.307 sys 0m7.875s 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:14.307 ************************************ 00:33:14.307 END TEST nvmf_target_disconnect_tc2 00:33:14.307 ************************************ 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:14.307 rmmod nvme_tcp 00:33:14.307 rmmod nvme_fabrics 00:33:14.307 rmmod nvme_keyring 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:33:14.307 23:39:11 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1537258 ']' 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1537258 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1537258 ']' 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1537258 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1537258 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1537258' 00:33:14.307 killing process with pid 1537258 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 1537258 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1537258 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:14.307 23:39:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:16.842 23:39:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:16.842 00:33:16.842 real 0m15.488s 00:33:16.842 user 0m58.546s 00:33:16.842 sys 0m10.250s 00:33:16.842 23:39:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:16.842 23:39:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:16.842 ************************************ 00:33:16.842 END TEST nvmf_target_disconnect 00:33:16.842 ************************************ 00:33:16.842 23:39:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:16.842 00:33:16.842 real 6m31.929s 00:33:16.842 user 17m3.240s 00:33:16.842 sys 1m27.667s 00:33:16.842 23:39:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:16.842 23:39:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.842 ************************************ 00:33:16.842 END TEST nvmf_host 00:33:16.842 ************************************ 00:33:16.842 00:33:16.842 real 
27m8.856s 00:33:16.842 user 74m16.261s 00:33:16.842 sys 6m22.993s 00:33:16.842 23:39:14 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:16.842 23:39:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:16.842 ************************************ 00:33:16.842 END TEST nvmf_tcp 00:33:16.842 ************************************ 00:33:16.842 23:39:14 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:33:16.843 23:39:14 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:16.843 23:39:14 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:16.843 23:39:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:16.843 23:39:14 -- common/autotest_common.sh@10 -- # set +x 00:33:16.843 ************************************ 00:33:16.843 START TEST spdkcli_nvmf_tcp 00:33:16.843 ************************************ 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:16.843 * Looking for test storage... 00:33:16.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:16.843 
23:39:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1538848 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1538848 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1538848 ']' 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:16.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:16.843 [2024-07-25 23:39:14.166142] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:33:16.843 [2024-07-25 23:39:14.166236] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1538848 ] 00:33:16.843 EAL: No free 2048 kB hugepages reported on node 1 00:33:16.843 [2024-07-25 23:39:14.199766] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
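For the spdkcli test the harness first launches a fresh target (nvmf_tgt -m 0x3 -p 0, per the trace above) and waitforlisten blocks until the RPC socket answers. A rough, hypothetical equivalent of that startup gate, assuming the default socket path /var/tmp/spdk.sock:

# Start the target, then poll the RPC socket until it responds;
# rpc_get_methods is a cheap RPC that any live SPDK app answers, and
# kill -0 confirms the process is still alive while we wait.
./build/bin/nvmf_tgt -m 0x3 -p 0 &
pid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.2
done
echo "nvmf_tgt (pid $pid) is serving RPC"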
00:33:16.843 [2024-07-25 23:39:14.228455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:16.843 [2024-07-25 23:39:14.322080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:16.843 [2024-07-25 23:39:14.322091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:16.843 23:39:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:16.843 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:16.843 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:16.843 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:16.843 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:16.843 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:16.843 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:16.843 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:16.844 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:16.844 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:16.844 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:16.844 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:16.844 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:16.844 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:16.844 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:16.844 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:16.844 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:16.844 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:16.844 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' 
'\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:16.844 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:16.844 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:16.844 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:16.844 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:16.844 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:16.844 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:16.844 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:16.844 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:16.844 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:16.844 ' 00:33:19.370 [2024-07-25 23:39:16.966391] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:20.742 [2024-07-25 23:39:18.186792] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:23.267 [2024-07-25 23:39:20.466043] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:25.162 [2024-07-25 23:39:22.408159] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:26.531 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:26.531 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:26.531 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:26.531 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:26.531 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:26.531 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:26.531 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:26.531 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:26.531 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:26.531 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:26.531 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:26.531 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:26.531 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:26.531 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:26.531 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:26.531 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:26.531 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:26.531 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:26.531 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:26.531 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:26.531 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:26.531 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:26.531 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:26.531 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:26.531 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:26.531 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:26.531 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:26.531 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:26.531 23:39:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:26.531 23:39:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:26.531 23:39:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:26.531 23:39:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:26.531 23:39:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:26.531 23:39:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:26.531 23:39:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:26.531 23:39:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:26.788 23:39:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:26.788 23:39:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:26.788 23:39:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:26.788 23:39:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:26.788 23:39:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:27.045 23:39:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:27.045 23:39:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:27.045 23:39:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:27.045 23:39:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:27.045 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:27.045 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:27.045 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:27.045 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:27.045 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:27.045 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:27.045 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:27.045 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:27.045 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:27.045 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:27.045 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:27.045 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:27.045 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:27.045 ' 00:33:32.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:32.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:32.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:32.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:32.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:32.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:32.305 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:32.305 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:32.305 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:32.305 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:32.305 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:32.305 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:32.305 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:32.305 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:32.305 23:39:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:32.305 23:39:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:32.305 23:39:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:32.305 23:39:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1538848 00:33:32.305 23:39:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1538848 ']' 00:33:32.305 23:39:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1538848 00:33:32.305 23:39:29 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@955 -- # uname 00:33:32.305 23:39:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:32.305 23:39:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1538848 00:33:32.305 23:39:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:32.305 23:39:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:32.305 23:39:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1538848' 00:33:32.305 killing process with pid 1538848 00:33:32.305 23:39:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1538848 00:33:32.305 23:39:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1538848 00:33:32.564 23:39:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:32.564 23:39:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:32.564 23:39:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1538848 ']' 00:33:32.564 23:39:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1538848 00:33:32.564 23:39:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1538848 ']' 00:33:32.564 23:39:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1538848 00:33:32.564 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1538848) - No such process 00:33:32.564 23:39:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1538848 is not found' 00:33:32.564 Process with pid 1538848 is not found 00:33:32.564 23:39:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:32.564 23:39:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:32.564 23:39:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:32.564 00:33:32.564 real 0m15.972s 00:33:32.564 user 0m33.803s 00:33:32.564 sys 0m0.792s 00:33:32.564 23:39:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:32.564 23:39:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:32.564 ************************************ 00:33:32.564 END TEST spdkcli_nvmf_tcp 00:33:32.564 ************************************ 00:33:32.564 23:39:30 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:32.564 23:39:30 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:32.564 23:39:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:32.564 23:39:30 -- common/autotest_common.sh@10 -- # set +x 00:33:32.564 ************************************ 00:33:32.564 START TEST nvmf_identify_passthru 00:33:32.564 ************************************ 00:33:32.564 23:39:30 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:32.564 * Looking for test storage... 
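The nvmf_identify_passthru test starting here verifies that an NVMe-oF target configured with --passthru-identify-ctrlr forwards the backing PCIe controller's identity to fabric initiators. Its pass/fail core, condensed from the spdk_nvme_identify traces recorded below (the real script wraps this in helper functions; this flattened form is an assumption):

# identify the controller locally over PCIe, then remotely over NVMe/TCP
pcie_sn=$(spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 | awk '/Serial Number:/ {print $3}')
tcp_sn=$(spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | awk '/Serial Number:/ {print $3}')
# with passthru enabled, both paths must report the same serial number
[ "$pcie_sn" != "$tcp_sn" ] && exit 1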
00:33:32.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:32.564 23:39:30 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:32.564 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:32.564 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:32.564 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:32.564 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:32.564 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:32.564 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:32.564 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:32.564 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:32.564 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:32.564 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:32.564 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:32.564 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:32.564 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:32.564 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:32.564 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:32.564 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:32.564 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:32.564 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:32.564 23:39:30 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:32.564 23:39:30 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:32.564 23:39:30 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:32.564 23:39:30 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.564 23:39:30 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.564 23:39:30 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.564 23:39:30 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:32.564 23:39:30 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.564 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:33:32.564 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:32.565 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:32.565 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:32.565 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:32.565 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:32.565 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:32.565 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:32.565 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:32.565 23:39:30 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:32.565 23:39:30 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:32.565 23:39:30 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:32.565 23:39:30 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:32.565 23:39:30 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.565 23:39:30 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.565 23:39:30 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.565 23:39:30 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:32.565 23:39:30 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.565 23:39:30 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:32.565 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:32.565 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:32.565 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:32.565 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:32.565 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:32.565 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:32.565 23:39:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:32.565 23:39:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:32.565 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:32.565 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:32.565 23:39:30 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:33:32.565 23:39:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:33:34.467 23:39:32 
nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:34.467 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:34.467 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:34.467 
23:39:32 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:34.467 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:34.467 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:34.467 23:39:32 
nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:34.467 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:34.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:34.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:33:34.467 00:33:34.467 --- 10.0.0.2 ping statistics --- 00:33:34.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:34.468 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:33:34.468 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:34.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:34.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:33:34.468 00:33:34.468 --- 10.0.0.1 ping statistics --- 00:33:34.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:34.468 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:33:34.468 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:34.468 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:33:34.468 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:34.468 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:34.468 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:34.468 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:34.468 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:34.468 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:34.468 23:39:32 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:34.726 23:39:32 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:34.726 23:39:32 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:34.726 23:39:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:34.726 23:39:32 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:34.726 23:39:32 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:33:34.726 23:39:32 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:33:34.726 23:39:32 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:33:34.726 23:39:32 
nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:33:34.726 23:39:32 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:33:34.726 23:39:32 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:33:34.726 23:39:32 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:34.726 23:39:32 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:34.726 23:39:32 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:33:34.726 23:39:32 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:33:34.726 23:39:32 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:33:34.726 23:39:32 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:33:34.726 23:39:32 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:33:34.726 23:39:32 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:33:34.726 23:39:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:33:34.726 23:39:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:34.726 23:39:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:34.726 EAL: No free 2048 kB hugepages reported on node 1 00:33:38.908 23:39:36 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:33:38.908 23:39:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:33:38.908 23:39:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:38.908 23:39:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:38.908 EAL: No free 2048 kB hugepages reported on node 1 00:33:43.094 23:39:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:43.094 23:39:40 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:43.094 23:39:40 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:43.094 23:39:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:43.094 23:39:40 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:43.094 23:39:40 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:43.094 23:39:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:43.094 23:39:40 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1543455 00:33:43.094 23:39:40 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:43.094 23:39:40 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:43.094 23:39:40 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- 
# waitforlisten 1543455 00:33:43.094 23:39:40 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1543455 ']' 00:33:43.094 23:39:40 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:43.094 23:39:40 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:43.094 23:39:40 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:43.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:43.094 23:39:40 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:43.094 23:39:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:43.353 [2024-07-25 23:39:40.847976] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:33:43.353 [2024-07-25 23:39:40.848092] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:43.353 EAL: No free 2048 kB hugepages reported on node 1 00:33:43.353 [2024-07-25 23:39:40.888609] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:43.353 [2024-07-25 23:39:40.915651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:43.353 [2024-07-25 23:39:41.005139] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:43.353 [2024-07-25 23:39:41.005203] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:43.353 [2024-07-25 23:39:41.005232] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:43.353 [2024-07-25 23:39:41.005244] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:43.353 [2024-07-25 23:39:41.005254] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
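Because this target was launched with --wait-for-rpc, it parks in a pre-init state so that identify passthru can be switched on before any subsystem exists; the RPC traces that follow record exactly that order. Condensed into one sequence (rpc.py stands in for the script's rpc_cmd wrapper, which is an assumption; the methods and flags are taken verbatim from the log):

rpc.py nvmf_set_config --passthru-identify-ctrlr    # only legal before init
rpc.py framework_start_init                         # leave the paused state
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420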
00:33:43.353 [2024-07-25 23:39:41.005314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:43.353 [2024-07-25 23:39:41.005376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:43.353 [2024-07-25 23:39:41.005423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:43.353 [2024-07-25 23:39:41.005425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:43.353 23:39:41 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:43.353 23:39:41 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:33:43.353 23:39:41 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:43.353 23:39:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.353 23:39:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:43.353 INFO: Log level set to 20 00:33:43.353 INFO: Requests: 00:33:43.353 { 00:33:43.353 "jsonrpc": "2.0", 00:33:43.353 "method": "nvmf_set_config", 00:33:43.353 "id": 1, 00:33:43.353 "params": { 00:33:43.353 "admin_cmd_passthru": { 00:33:43.353 "identify_ctrlr": true 00:33:43.353 } 00:33:43.353 } 00:33:43.353 } 00:33:43.353 00:33:43.353 INFO: response: 00:33:43.353 { 00:33:43.353 "jsonrpc": "2.0", 00:33:43.353 "id": 1, 00:33:43.353 "result": true 00:33:43.353 } 00:33:43.353 00:33:43.353 23:39:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.353 23:39:41 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:43.353 23:39:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.353 23:39:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:43.353 INFO: Setting log level to 20 00:33:43.353 INFO: Setting log level to 20 00:33:43.353 INFO: Log level set to 20 00:33:43.353 INFO: Log level set to 20 00:33:43.353 INFO: Requests: 00:33:43.353 { 00:33:43.353 "jsonrpc": "2.0", 00:33:43.353 "method": "framework_start_init", 00:33:43.353 "id": 1 00:33:43.353 } 00:33:43.353 00:33:43.353 INFO: Requests: 00:33:43.353 { 00:33:43.353 "jsonrpc": "2.0", 00:33:43.353 "method": "framework_start_init", 00:33:43.353 "id": 1 00:33:43.353 } 00:33:43.353 00:33:43.612 [2024-07-25 23:39:41.175452] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:43.612 INFO: response: 00:33:43.612 { 00:33:43.612 "jsonrpc": "2.0", 00:33:43.612 "id": 1, 00:33:43.612 "result": true 00:33:43.612 } 00:33:43.612 00:33:43.612 INFO: response: 00:33:43.612 { 00:33:43.612 "jsonrpc": "2.0", 00:33:43.612 "id": 1, 00:33:43.612 "result": true 00:33:43.612 } 00:33:43.612 00:33:43.612 23:39:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.612 23:39:41 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:43.612 23:39:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.612 23:39:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:43.612 INFO: Setting log level to 40 00:33:43.612 INFO: Setting log level to 40 00:33:43.612 INFO: Setting log level to 40 00:33:43.612 [2024-07-25 23:39:41.185573] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:43.612 23:39:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.612 23:39:41 
nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:43.612 23:39:41 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:43.612 23:39:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:43.612 23:39:41 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:33:43.612 23:39:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.612 23:39:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:46.890 Nvme0n1 00:33:46.890 23:39:44 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.890 23:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:46.890 23:39:44 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.890 23:39:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:46.890 23:39:44 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.890 23:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:46.890 23:39:44 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.890 23:39:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:46.890 23:39:44 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.890 23:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:46.890 23:39:44 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.890 23:39:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:46.890 [2024-07-25 23:39:44.080115] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:46.890 23:39:44 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.890 23:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:46.890 23:39:44 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.890 23:39:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:46.890 [ 00:33:46.890 { 00:33:46.890 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:46.890 "subtype": "Discovery", 00:33:46.890 "listen_addresses": [], 00:33:46.890 "allow_any_host": true, 00:33:46.890 "hosts": [] 00:33:46.890 }, 00:33:46.890 { 00:33:46.890 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:46.890 "subtype": "NVMe", 00:33:46.890 "listen_addresses": [ 00:33:46.890 { 00:33:46.890 "trtype": "TCP", 00:33:46.890 "adrfam": "IPv4", 00:33:46.890 "traddr": "10.0.0.2", 00:33:46.890 "trsvcid": "4420" 00:33:46.890 } 00:33:46.890 ], 00:33:46.890 "allow_any_host": true, 00:33:46.890 "hosts": [], 00:33:46.890 "serial_number": "SPDK00000000000001", 00:33:46.890 "model_number": "SPDK bdev Controller", 00:33:46.890 "max_namespaces": 1, 00:33:46.890 "min_cntlid": 1, 00:33:46.890 "max_cntlid": 65519, 00:33:46.890 "namespaces": [ 00:33:46.890 { 00:33:46.890 "nsid": 1, 00:33:46.890 "bdev_name": "Nvme0n1", 00:33:46.890 "name": "Nvme0n1", 00:33:46.890 "nguid": "C3C0DF20A67747B7937BA987B84509AD", 00:33:46.890 "uuid": 
"c3c0df20-a677-47b7-937b-a987b84509ad" 00:33:46.890 } 00:33:46.890 ] 00:33:46.890 } 00:33:46.890 ] 00:33:46.890 23:39:44 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.890 23:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:46.890 23:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:46.890 23:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:46.890 EAL: No free 2048 kB hugepages reported on node 1 00:33:46.890 23:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:33:46.890 23:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:46.890 23:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:46.890 23:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:46.890 EAL: No free 2048 kB hugepages reported on node 1 00:33:46.890 23:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:46.890 23:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:33:46.890 23:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:46.890 23:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:46.890 23:39:44 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.890 23:39:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:46.890 23:39:44 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.891 23:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:46.891 23:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:46.891 23:39:44 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:46.891 23:39:44 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:33:46.891 23:39:44 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:46.891 23:39:44 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:33:46.891 23:39:44 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:46.891 23:39:44 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:46.891 rmmod nvme_tcp 00:33:46.891 rmmod nvme_fabrics 00:33:46.891 rmmod nvme_keyring 00:33:46.891 23:39:44 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:46.891 23:39:44 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:33:46.891 23:39:44 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:33:46.891 23:39:44 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1543455 ']' 00:33:46.891 23:39:44 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1543455 00:33:46.891 23:39:44 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1543455 ']' 00:33:46.891 23:39:44 nvmf_identify_passthru -- 
common/autotest_common.sh@954 -- # kill -0 1543455 00:33:46.891 23:39:44 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:33:46.891 23:39:44 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:46.891 23:39:44 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1543455 00:33:46.891 23:39:44 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:46.891 23:39:44 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:46.891 23:39:44 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1543455' 00:33:46.891 killing process with pid 1543455 00:33:46.891 23:39:44 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1543455 00:33:46.891 23:39:44 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1543455 00:33:48.785 23:39:46 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:48.785 23:39:46 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:48.785 23:39:46 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:48.785 23:39:46 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:48.785 23:39:46 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:48.785 23:39:46 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.785 23:39:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:48.785 23:39:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:50.726 23:39:48 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:50.726 00:33:50.726 real 0m17.972s 00:33:50.726 user 0m26.475s 00:33:50.726 sys 0m2.273s 00:33:50.726 23:39:48 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:50.726 23:39:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:50.726 ************************************ 00:33:50.726 END TEST nvmf_identify_passthru 00:33:50.726 ************************************ 00:33:50.726 23:39:48 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:50.726 23:39:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:50.726 23:39:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:50.726 23:39:48 -- common/autotest_common.sh@10 -- # set +x 00:33:50.726 ************************************ 00:33:50.726 START TEST nvmf_dif 00:33:50.726 ************************************ 00:33:50.726 23:39:48 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:50.726 * Looking for test storage... 
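nvmf_dif, which begins here, exercises end-to-end data protection over NVMe/TCP using null bdevs; the defaults set a few lines down (NULL_META=16, NULL_BLOCK_SIZE=512, NULL_SIZE=64, NULL_DIF=1) describe a 64 MB null device with 16 bytes of metadata per 512-byte block and DIF type 1 protection. Roughly how such a bdev is created (the bdev_null_create flag names reflect current rpc.py usage and are an assumption, not a quote from this log):

# 64 MB null bdev: 512-byte blocks, 16-byte metadata, protection type 1
rpc.py bdev_null_create Null0 64 512 --md-size 16 --dif-type 1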
00:33:50.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:50.726 23:39:48 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:50.726 23:39:48 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:50.726 23:39:48 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:50.726 23:39:48 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:50.726 23:39:48 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:50.726 23:39:48 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:50.726 23:39:48 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:50.726 23:39:48 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:50.726 23:39:48 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:50.726 23:39:48 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:50.726 23:39:48 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:50.726 23:39:48 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:50.726 23:39:48 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:50.726 23:39:48 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:50.726 23:39:48 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:50.726 23:39:48 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:50.726 23:39:48 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:50.726 23:39:48 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:50.726 23:39:48 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:50.727 23:39:48 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:50.727 23:39:48 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:50.727 23:39:48 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:50.727 23:39:48 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.727 23:39:48 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.727 23:39:48 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.727 23:39:48 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:33:50.727 23:39:48 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.727 23:39:48 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:33:50.727 23:39:48 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:50.727 23:39:48 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:50.727 23:39:48 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:50.727 23:39:48 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:50.727 23:39:48 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:50.727 23:39:48 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:50.727 23:39:48 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:50.727 23:39:48 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:50.727 23:39:48 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:50.727 23:39:48 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:50.727 23:39:48 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:50.727 23:39:48 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:50.727 23:39:48 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:50.727 23:39:48 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:50.727 23:39:48 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:50.727 23:39:48 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:50.727 23:39:48 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:50.727 23:39:48 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:50.727 23:39:48 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:50.727 23:39:48 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:50.727 23:39:48 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:50.727 23:39:48 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:50.727 23:39:48 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:50.727 23:39:48 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:33:50.727 23:39:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:52.647 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:52.647 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:52.647 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:52.647 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:52.647 23:39:50 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:52.648 23:39:50 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:52.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:52.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:33:52.648 00:33:52.648 --- 10.0.0.2 ping statistics --- 00:33:52.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.648 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:52.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:52.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:33:52.648 00:33:52.648 --- 10.0.0.1 ping statistics --- 00:33:52.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.648 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:33:52.648 23:39:50 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:53.581 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:53.581 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:53.581 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:53.581 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:53.581 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:53.581 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:53.581 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:53.582 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:53.582 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:53.582 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:53.582 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:53.582 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:53.582 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:53.582 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:53.582 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:53.582 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:53.582 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:53.840 23:39:51 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:53.840 23:39:51 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:53.840 23:39:51 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:53.840 23:39:51 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:53.840 23:39:51 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:53.840 23:39:51 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:53.840 23:39:51 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:53.840 23:39:51 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:53.840 23:39:51 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:53.840 23:39:51 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:53.840 23:39:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:53.840 23:39:51 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1546599 00:33:53.840 23:39:51 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:53.840 23:39:51 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1546599 00:33:53.840 23:39:51 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1546599 ']' 00:33:53.840 23:39:51 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:53.840 23:39:51 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:53.840 23:39:51 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:53.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:53.840 23:39:51 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:53.840 23:39:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:53.840 [2024-07-25 23:39:51.470635] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:33:53.840 [2024-07-25 23:39:51.470706] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:53.840 EAL: No free 2048 kB hugepages reported on node 1 00:33:53.840 [2024-07-25 23:39:51.507018] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:53.840 [2024-07-25 23:39:51.533835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:54.099 [2024-07-25 23:39:51.618573] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:54.099 [2024-07-25 23:39:51.618651] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:54.099 [2024-07-25 23:39:51.618680] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:54.099 [2024-07-25 23:39:51.618691] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:54.099 [2024-07-25 23:39:51.618700] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
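At this point nvmfappstart has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waitforlisten is blocking on the RPC socket named in the "Waiting for process..." line. A rough sketch of that start-and-wait sequence, assuming waitforlisten simply probes the socket with an RPC until the target answers ($rootdir stands for the spdk checkout, as elsewhere in this workspace):

    NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
    "${NS_CMD[@]}" "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    nvmfpid=$!

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # app exited before listening
            # rpc_get_methods succeeds once the RPC server is accepting connections.
            if "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1                                      # timed out
    }

    waitforlisten "$nvmfpid"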
00:33:54.099 [2024-07-25 23:39:51.618735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:54.099 23:39:51 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:54.099 23:39:51 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:33:54.099 23:39:51 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:54.099 23:39:51 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:54.099 23:39:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:54.099 23:39:51 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:54.099 23:39:51 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:54.099 23:39:51 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:54.099 23:39:51 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.099 23:39:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:54.099 [2024-07-25 23:39:51.762928] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:54.099 23:39:51 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.099 23:39:51 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:54.099 23:39:51 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:54.099 23:39:51 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:54.099 23:39:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:54.099 ************************************ 00:33:54.099 START TEST fio_dif_1_default 00:33:54.099 ************************************ 00:33:54.099 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:33:54.099 23:39:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:54.099 23:39:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:54.099 23:39:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:33:54.099 23:39:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:54.099 23:39:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:54.099 23:39:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:54.099 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.099 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:54.099 bdev_null0 00:33:54.099 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.099 23:39:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:54.099 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.099 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:54.099 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.099 23:39:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:54.099 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.099 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:54.099 23:39:51 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.099 23:39:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:54.099 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.099 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:54.099 [2024-07-25 23:39:51.823257] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:54.358 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.358 23:39:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:54.358 23:39:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:54.358 23:39:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:54.358 23:39:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:33:54.358 23:39:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:33:54.358 23:39:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:54.358 23:39:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:54.358 23:39:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:54.358 { 00:33:54.358 "params": { 00:33:54.358 "name": "Nvme$subsystem", 00:33:54.358 "trtype": "$TEST_TRANSPORT", 00:33:54.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:54.358 "adrfam": "ipv4", 00:33:54.358 "trsvcid": "$NVMF_PORT", 00:33:54.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:54.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:54.358 "hdgst": ${hdgst:-false}, 00:33:54.358 "ddgst": ${ddgst:-false} 00:33:54.358 }, 00:33:54.358 "method": "bdev_nvme_attach_controller" 00:33:54.358 } 00:33:54.358 EOF 00:33:54.358 )") 00:33:54.358 23:39:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:54.358 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:54.358 23:39:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:54.358 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:54.358 23:39:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:54.358 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:54.358 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:54.358 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:54.358 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:33:54.358 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:54.358 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:54.358 23:39:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:33:54.358 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:54.358 23:39:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:54.358 23:39:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:54.358 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:33:54.358 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:54.359 23:39:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:33:54.359 23:39:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:33:54.359 23:39:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:54.359 "params": { 00:33:54.359 "name": "Nvme0", 00:33:54.359 "trtype": "tcp", 00:33:54.359 "traddr": "10.0.0.2", 00:33:54.359 "adrfam": "ipv4", 00:33:54.359 "trsvcid": "4420", 00:33:54.359 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:54.359 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:54.359 "hdgst": false, 00:33:54.359 "ddgst": false 00:33:54.359 }, 00:33:54.359 "method": "bdev_nvme_attach_controller" 00:33:54.359 }' 00:33:54.359 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:54.359 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:54.359 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:54.359 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:54.359 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:54.359 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:54.359 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:54.359 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:54.359 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:54.359 23:39:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:54.359 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:54.359 fio-3.35 00:33:54.359 Starting 1 thread 00:33:54.617 EAL: No free 2048 kB hugepages reported on node 1 00:34:06.810 00:34:06.810 filename0: (groupid=0, jobs=1): err= 0: pid=1546825: Thu Jul 25 23:40:02 2024 00:34:06.810 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10012msec) 00:34:06.810 slat (nsec): min=4494, max=50896, avg=9360.02, stdev=2954.14 00:34:06.810 clat (usec): min=40877, max=46000, avg=40999.37, stdev=330.01 00:34:06.810 lat (usec): min=40885, max=46014, avg=41008.73, stdev=330.06 00:34:06.810 clat percentiles (usec): 00:34:06.810 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:06.810 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:06.810 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:06.810 | 99.00th=[41681], 99.50th=[41681], 99.90th=[45876], 99.95th=[45876], 00:34:06.810 | 99.99th=[45876] 00:34:06.810 bw ( KiB/s): min= 384, max= 416, per=99.50%, avg=388.80, stdev=11.72, samples=20 00:34:06.810 iops : min= 96, max= 104, 
avg=97.20, stdev= 2.93, samples=20 00:34:06.810 lat (msec) : 50=100.00% 00:34:06.810 cpu : usr=89.46%, sys=10.27%, ctx=17, majf=0, minf=249 00:34:06.810 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:06.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.810 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.810 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:06.810 00:34:06.810 Run status group 0 (all jobs): 00:34:06.810 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10012-10012msec 00:34:06.810 23:40:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:06.810 23:40:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:06.810 23:40:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:06.810 23:40:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:06.810 23:40:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:06.810 23:40:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:06.810 23:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.810 23:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:06.810 23:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.810 23:40:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:06.810 23:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.810 23:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:06.810 23:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.810 00:34:06.810 real 0m11.148s 00:34:06.810 user 0m10.062s 00:34:06.810 sys 0m1.284s 00:34:06.810 23:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:06.810 23:40:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:06.810 ************************************ 00:34:06.810 END TEST fio_dif_1_default 00:34:06.810 ************************************ 00:34:06.810 23:40:02 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:06.810 23:40:02 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:06.810 23:40:02 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:06.810 23:40:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:06.810 ************************************ 00:34:06.810 START TEST fio_dif_1_multi_subsystems 00:34:06.810 ************************************ 00:34:06.810 23:40:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:34:06.810 23:40:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:06.810 23:40:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:06.810 23:40:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:06.810 23:40:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:06.810 23:40:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # 
create_subsystem 0 00:34:06.810 23:40:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:06.810 23:40:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:06.810 23:40:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.810 23:40:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:06.810 bdev_null0 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:06.810 [2024-07-25 23:40:03.023930] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:06.810 bdev_null1 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.810 23:40:03 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:06.810 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:06.811 { 00:34:06.811 "params": { 00:34:06.811 "name": "Nvme$subsystem", 00:34:06.811 "trtype": "$TEST_TRANSPORT", 00:34:06.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:06.811 "adrfam": "ipv4", 00:34:06.811 "trsvcid": "$NVMF_PORT", 00:34:06.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:06.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:06.811 "hdgst": ${hdgst:-false}, 00:34:06.811 "ddgst": ${ddgst:-false} 00:34:06.811 }, 00:34:06.811 "method": "bdev_nvme_attach_controller" 00:34:06.811 } 00:34:06.811 EOF 00:34:06.811 )") 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1341 -- # shift 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:06.811 { 00:34:06.811 "params": { 00:34:06.811 "name": "Nvme$subsystem", 00:34:06.811 "trtype": "$TEST_TRANSPORT", 00:34:06.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:06.811 "adrfam": "ipv4", 00:34:06.811 "trsvcid": "$NVMF_PORT", 00:34:06.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:06.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:06.811 "hdgst": ${hdgst:-false}, 00:34:06.811 "ddgst": ${ddgst:-false} 00:34:06.811 }, 00:34:06.811 "method": "bdev_nvme_attach_controller" 00:34:06.811 } 00:34:06.811 EOF 00:34:06.811 )") 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
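The config+=("$(cat <<-EOF ...)") fragments traced above are gen_nvmf_target_json assembling the JSON that fio will consume: one bdev_nvme_attach_controller stanza per subsystem id, comma-joined via IFS before printing. A condensed sketch of that helper, paraphrasing the heredoc in the trace (the real function also substitutes $TEST_TRANSPORT and the hdgst/ddgst overrides):

    gen_nvmf_target_json() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            # One attach-controller stanza per subsystem id (0, 1, ...).
            config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
            )")
        done
        local IFS=,
        # Comma-join the stanzas, as in the printf '%s\n' output below.
        printf '%s\n' "${config[*]}"
    }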
00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:06.811 "params": { 00:34:06.811 "name": "Nvme0", 00:34:06.811 "trtype": "tcp", 00:34:06.811 "traddr": "10.0.0.2", 00:34:06.811 "adrfam": "ipv4", 00:34:06.811 "trsvcid": "4420", 00:34:06.811 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:06.811 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:06.811 "hdgst": false, 00:34:06.811 "ddgst": false 00:34:06.811 }, 00:34:06.811 "method": "bdev_nvme_attach_controller" 00:34:06.811 },{ 00:34:06.811 "params": { 00:34:06.811 "name": "Nvme1", 00:34:06.811 "trtype": "tcp", 00:34:06.811 "traddr": "10.0.0.2", 00:34:06.811 "adrfam": "ipv4", 00:34:06.811 "trsvcid": "4420", 00:34:06.811 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:06.811 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:06.811 "hdgst": false, 00:34:06.811 "ddgst": false 00:34:06.811 }, 00:34:06.811 "method": "bdev_nvme_attach_controller" 00:34:06.811 }' 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:06.811 23:40:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:06.811 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:06.811 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:06.811 fio-3.35 00:34:06.811 Starting 2 threads 00:34:06.811 EAL: No free 2048 kB hugepages reported on node 1 00:34:16.772 00:34:16.772 filename0: (groupid=0, jobs=1): err= 0: pid=1548227: Thu Jul 25 23:40:14 2024 00:34:16.772 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10012msec) 00:34:16.772 slat (usec): min=4, max=123, avg= 9.91, stdev= 4.85 00:34:16.772 clat (usec): min=40876, max=46448, avg=40999.38, stdev=358.47 00:34:16.772 lat (usec): min=40885, max=46462, avg=41009.29, stdev=358.65 00:34:16.772 clat percentiles (usec): 00:34:16.772 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:16.772 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:16.772 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:16.772 | 99.00th=[41681], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:34:16.772 | 99.99th=[46400] 00:34:16.772 bw 
( KiB/s): min= 384, max= 416, per=49.76%, avg=388.80, stdev=11.72, samples=20 00:34:16.772 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:16.772 lat (msec) : 50=100.00% 00:34:16.772 cpu : usr=93.82%, sys=5.77%, ctx=37, majf=0, minf=251 00:34:16.772 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:16.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.772 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.772 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:16.772 filename1: (groupid=0, jobs=1): err= 0: pid=1548228: Thu Jul 25 23:40:14 2024 00:34:16.772 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10013msec) 00:34:16.772 slat (nsec): min=4967, max=28017, avg=9548.48, stdev=2577.26 00:34:16.772 clat (usec): min=40859, max=46518, avg=41005.09, stdev=369.97 00:34:16.772 lat (usec): min=40867, max=46545, avg=41014.64, stdev=370.42 00:34:16.772 clat percentiles (usec): 00:34:16.772 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:16.772 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:16.772 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:16.772 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:34:16.772 | 99.99th=[46400] 00:34:16.772 bw ( KiB/s): min= 384, max= 416, per=49.76%, avg=388.80, stdev=11.72, samples=20 00:34:16.772 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:16.772 lat (msec) : 50=100.00% 00:34:16.772 cpu : usr=93.98%, sys=5.73%, ctx=14, majf=0, minf=93 00:34:16.772 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:16.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.772 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.772 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:16.772 00:34:16.772 Run status group 0 (all jobs): 00:34:16.772 READ: bw=780KiB/s (799kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10012-10013msec 00:34:16.772 23:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:16.772 23:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:16.772 23:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:16.772 23:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:16.772 23:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:16.772 23:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:16.772 23:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.772 23:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:16.772 23:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.772 23:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:16.772 23:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.772 23:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:34:16.772 23:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.772 23:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:16.772 23:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:16.772 23:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:16.772 23:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:16.772 23:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.772 23:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:16.772 23:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.772 23:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:16.772 23:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.772 23:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:16.772 23:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.772 00:34:16.772 real 0m11.461s 00:34:16.772 user 0m20.216s 00:34:16.772 sys 0m1.494s 00:34:16.772 23:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:16.772 23:40:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:16.772 ************************************ 00:34:16.772 END TEST fio_dif_1_multi_subsystems 00:34:16.772 ************************************ 00:34:16.772 23:40:14 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:16.772 23:40:14 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:16.772 23:40:14 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:16.772 23:40:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:17.029 ************************************ 00:34:17.029 START TEST fio_dif_rand_params 00:34:17.029 ************************************ 00:34:17.029 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:17.030 23:40:14 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:17.030 bdev_null0 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:17.030 [2024-07-25 23:40:14.542022] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:17.030 { 00:34:17.030 "params": { 00:34:17.030 "name": "Nvme$subsystem", 00:34:17.030 "trtype": "$TEST_TRANSPORT", 00:34:17.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:17.030 "adrfam": "ipv4", 00:34:17.030 "trsvcid": "$NVMF_PORT", 00:34:17.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:17.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:17.030 "hdgst": ${hdgst:-false}, 00:34:17.030 "ddgst": ${ddgst:-false} 00:34:17.030 }, 00:34:17.030 "method": "bdev_nvme_attach_controller" 00:34:17.030 } 00:34:17.030 EOF 00:34:17.030 )") 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
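The fio_bdev/fio_plugin pair being traced here resolves to stock fio with SPDK's bdev engine preloaded; the JSON from gen_nvmf_target_json and the generated fio job file are handed over as /dev/fd paths. A sketch of the final command line, using process substitution in place of the harness's explicit fds 61/62 (gen_fio_conf is the job-file generator from dif.sh):

    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev

    # LD_PRELOAD injects the spdk_bdev ioengine into plain fio; the bdev JSON
    # config and the fio job file both arrive as anonymous /dev/fd/* paths.
    LD_PRELOAD="$plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev \
        --spdk_json_conf <(gen_nvmf_target_json 0) \
        <(gen_fio_conf)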
00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:17.030 "params": { 00:34:17.030 "name": "Nvme0", 00:34:17.030 "trtype": "tcp", 00:34:17.030 "traddr": "10.0.0.2", 00:34:17.030 "adrfam": "ipv4", 00:34:17.030 "trsvcid": "4420", 00:34:17.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:17.030 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:17.030 "hdgst": false, 00:34:17.030 "ddgst": false 00:34:17.030 }, 00:34:17.030 "method": "bdev_nvme_attach_controller" 00:34:17.030 }' 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:17.030 23:40:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:17.287 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:17.287 ... 
00:34:17.287 fio-3.35 00:34:17.287 Starting 3 threads 00:34:17.287 EAL: No free 2048 kB hugepages reported on node 1 00:34:23.853 00:34:23.853 filename0: (groupid=0, jobs=1): err= 0: pid=1549629: Thu Jul 25 23:40:20 2024 00:34:23.853 read: IOPS=209, BW=26.2MiB/s (27.5MB/s)(131MiB/5005msec) 00:34:23.853 slat (nsec): min=4506, max=96254, avg=14934.80, stdev=5949.16 00:34:23.853 clat (usec): min=4842, max=95255, avg=14277.68, stdev=11344.16 00:34:23.853 lat (usec): min=4854, max=95287, avg=14292.61, stdev=11343.96 00:34:23.853 clat percentiles (usec): 00:34:23.853 | 1.00th=[ 5276], 5.00th=[ 5735], 10.00th=[ 6980], 20.00th=[ 8717], 00:34:23.853 | 30.00th=[ 9896], 40.00th=[10945], 50.00th=[11863], 60.00th=[12649], 00:34:23.853 | 70.00th=[13304], 80.00th=[14484], 90.00th=[16581], 95.00th=[48497], 00:34:23.853 | 99.00th=[53740], 99.50th=[57934], 99.90th=[94897], 99.95th=[94897], 00:34:23.853 | 99.99th=[94897] 00:34:23.853 bw ( KiB/s): min=16896, max=33536, per=34.56%, avg=26803.20, stdev=4848.33, samples=10 00:34:23.853 iops : min= 132, max= 262, avg=209.40, stdev=37.88, samples=10 00:34:23.853 lat (msec) : 10=31.43%, 20=60.86%, 50=3.81%, 100=3.90% 00:34:23.853 cpu : usr=93.29%, sys=6.29%, ctx=8, majf=0, minf=151 00:34:23.853 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:23.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:23.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:23.854 issued rwts: total=1050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:23.854 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:23.854 filename0: (groupid=0, jobs=1): err= 0: pid=1549630: Thu Jul 25 23:40:20 2024 00:34:23.854 read: IOPS=212, BW=26.6MiB/s (27.9MB/s)(133MiB/5005msec) 00:34:23.854 slat (nsec): min=4921, max=63191, avg=23591.75, stdev=8346.56 00:34:23.854 clat (usec): min=5101, max=54872, avg=14078.36, stdev=9483.63 00:34:23.854 lat (usec): min=5113, max=54899, avg=14101.95, stdev=9483.31 00:34:23.854 clat percentiles (usec): 00:34:23.854 | 1.00th=[ 5997], 5.00th=[ 6783], 10.00th=[ 8356], 20.00th=[ 9372], 00:34:23.854 | 30.00th=[10552], 40.00th=[11469], 50.00th=[12256], 60.00th=[13042], 00:34:23.854 | 70.00th=[13698], 80.00th=[14615], 90.00th=[16188], 95.00th=[48497], 00:34:23.854 | 99.00th=[52691], 99.50th=[52691], 99.90th=[54789], 99.95th=[54789], 00:34:23.854 | 99.99th=[54789] 00:34:23.854 bw ( KiB/s): min=23296, max=31744, per=35.06%, avg=27187.20, stdev=2790.80, samples=10 00:34:23.854 iops : min= 182, max= 248, avg=212.40, stdev=21.80, samples=10 00:34:23.854 lat (msec) : 10=26.13%, 20=67.95%, 50=3.01%, 100=2.91% 00:34:23.854 cpu : usr=90.11%, sys=7.81%, ctx=299, majf=0, minf=73 00:34:23.854 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:23.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:23.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:23.854 issued rwts: total=1064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:23.854 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:23.854 filename0: (groupid=0, jobs=1): err= 0: pid=1549631: Thu Jul 25 23:40:20 2024 00:34:23.854 read: IOPS=186, BW=23.4MiB/s (24.5MB/s)(118MiB/5046msec) 00:34:23.854 slat (nsec): min=4466, max=47352, avg=15533.86, stdev=5208.83 00:34:23.854 clat (usec): min=4584, max=90455, avg=15986.72, stdev=12154.07 00:34:23.854 lat (usec): min=4595, max=90469, avg=16002.26, stdev=12153.84 00:34:23.854 clat percentiles (usec): 
00:34:23.854 | 1.00th=[ 5866], 5.00th=[ 7439], 10.00th=[ 8717], 20.00th=[ 9634], 00:34:23.854 | 30.00th=[11076], 40.00th=[12125], 50.00th=[12911], 60.00th=[13698], 00:34:23.854 | 70.00th=[14615], 80.00th=[15795], 90.00th=[19006], 95.00th=[51119], 00:34:23.854 | 99.00th=[55313], 99.50th=[57410], 99.90th=[90702], 99.95th=[90702], 00:34:23.854 | 99.99th=[90702] 00:34:23.854 bw ( KiB/s): min=18688, max=28672, per=31.04%, avg=24068.40, stdev=3092.81, samples=10 00:34:23.854 iops : min= 146, max= 224, avg=188.00, stdev=24.18, samples=10 00:34:23.854 lat (msec) : 10=23.01%, 20=67.44%, 50=3.39%, 100=6.15% 00:34:23.854 cpu : usr=93.62%, sys=5.93%, ctx=13, majf=0, minf=99 00:34:23.854 IO depths : 1=2.3%, 2=97.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:23.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:23.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:23.854 issued rwts: total=943,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:23.854 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:23.854 00:34:23.854 Run status group 0 (all jobs): 00:34:23.854 READ: bw=75.7MiB/s (79.4MB/s), 23.4MiB/s-26.6MiB/s (24.5MB/s-27.9MB/s), io=382MiB (401MB), run=5005-5046msec 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
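[Note] A quick consistency check on the summary above: per-job bandwidth is IOPS x block size, e.g. 209.4 IOPS x 128 KiB = 26803 KiB/s, exactly the avg bw reported for pid 1549629, and the group line agrees (382 MiB over the longest 5.046 s run ~ 75.7 MiB/s). The jump from the 90th percentile completion latency (~16-19 msec) to the 95th (~48-51 msec) is the same long tail counted in the lat (msec) 50/100 buckets. With subsystem 0 torn down, the trace below rebuilds three null bdevs with --dif-type 2 for the 24-thread 4k pass.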
00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:23.854 bdev_null0 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:23.854 [2024-07-25 23:40:20.765644] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:23.854 bdev_null1 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.854 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:23.855 bdev_null2 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:34:23.855 { 00:34:23.855 "params": { 00:34:23.855 "name": "Nvme$subsystem", 00:34:23.855 "trtype": "$TEST_TRANSPORT", 00:34:23.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:23.855 "adrfam": "ipv4", 00:34:23.855 "trsvcid": "$NVMF_PORT", 00:34:23.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:23.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:23.855 "hdgst": ${hdgst:-false}, 00:34:23.855 "ddgst": ${ddgst:-false} 00:34:23.855 }, 00:34:23.855 "method": "bdev_nvme_attach_controller" 00:34:23.855 } 00:34:23.855 EOF 00:34:23.855 )") 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:23.855 { 00:34:23.855 "params": { 00:34:23.855 "name": "Nvme$subsystem", 00:34:23.855 "trtype": "$TEST_TRANSPORT", 00:34:23.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:23.855 "adrfam": "ipv4", 00:34:23.855 "trsvcid": "$NVMF_PORT", 00:34:23.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:23.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:23.855 "hdgst": ${hdgst:-false}, 00:34:23.855 "ddgst": ${ddgst:-false} 00:34:23.855 }, 00:34:23.855 "method": "bdev_nvme_attach_controller" 00:34:23.855 } 00:34:23.855 EOF 00:34:23.855 )") 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:23.855 { 00:34:23.855 "params": { 00:34:23.855 "name": "Nvme$subsystem", 00:34:23.855 "trtype": "$TEST_TRANSPORT", 00:34:23.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:23.855 "adrfam": "ipv4", 00:34:23.855 "trsvcid": "$NVMF_PORT", 00:34:23.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:23.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:23.855 "hdgst": ${hdgst:-false}, 00:34:23.855 "ddgst": ${ddgst:-false} 00:34:23.855 }, 00:34:23.855 "method": "bdev_nvme_attach_controller" 00:34:23.855 } 00:34:23.855 EOF 00:34:23.855 )") 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:23.855 23:40:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:23.855 "params": { 00:34:23.855 "name": "Nvme0", 00:34:23.855 "trtype": "tcp", 00:34:23.855 "traddr": "10.0.0.2", 00:34:23.855 "adrfam": "ipv4", 00:34:23.855 "trsvcid": "4420", 00:34:23.855 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:23.855 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:23.855 "hdgst": false, 00:34:23.855 "ddgst": false 00:34:23.855 }, 00:34:23.855 "method": "bdev_nvme_attach_controller" 00:34:23.855 },{ 00:34:23.855 "params": { 00:34:23.855 "name": "Nvme1", 00:34:23.855 "trtype": "tcp", 00:34:23.855 "traddr": "10.0.0.2", 00:34:23.855 "adrfam": "ipv4", 00:34:23.855 "trsvcid": "4420", 00:34:23.855 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:23.855 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:23.856 "hdgst": false, 00:34:23.856 "ddgst": false 00:34:23.856 }, 00:34:23.856 "method": "bdev_nvme_attach_controller" 00:34:23.856 },{ 00:34:23.856 "params": { 00:34:23.856 "name": "Nvme2", 00:34:23.856 "trtype": "tcp", 00:34:23.856 "traddr": "10.0.0.2", 00:34:23.856 "adrfam": "ipv4", 00:34:23.856 "trsvcid": "4420", 00:34:23.856 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:23.856 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:23.856 "hdgst": false, 00:34:23.856 "ddgst": false 00:34:23.856 }, 00:34:23.856 "method": "bdev_nvme_attach_controller" 00:34:23.856 }' 00:34:23.856 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:23.856 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:23.856 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:23.856 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:23.856 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:23.856 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:23.856 23:40:20 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:34:23.856 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:23.856 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:23.856 23:40:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:23.856 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:23.856 ... 00:34:23.856 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:23.856 ... 00:34:23.856 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:23.856 ... 00:34:23.856 fio-3.35 00:34:23.856 Starting 24 threads 00:34:23.856 EAL: No free 2048 kB hugepages reported on node 1 00:34:36.094 00:34:36.094 filename0: (groupid=0, jobs=1): err= 0: pid=1550489: Thu Jul 25 23:40:32 2024 00:34:36.094 read: IOPS=359, BW=1439KiB/s (1473kB/s)(14.1MiB/10010msec) 00:34:36.094 slat (usec): min=8, max=117, avg=29.66, stdev=13.81 00:34:36.094 clat (msec): min=11, max=448, avg=44.22, stdev=51.69 00:34:36.094 lat (msec): min=11, max=448, avg=44.25, stdev=51.69 00:34:36.094 clat percentiles (msec): 00:34:36.094 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:36.094 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:36.094 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 39], 00:34:36.094 | 99.00th=[ 317], 99.50th=[ 321], 99.90th=[ 397], 99.95th=[ 447], 00:34:36.094 | 99.99th=[ 447] 00:34:36.094 bw ( KiB/s): min= 128, max= 1920, per=4.03%, avg=1408.16, stdev=754.64, samples=19 00:34:36.094 iops : min= 32, max= 480, avg=352.00, stdev=188.65, samples=19 00:34:36.094 lat (msec) : 20=0.06%, 50=95.00%, 100=0.50%, 250=0.94%, 500=3.50% 00:34:36.094 cpu : usr=96.41%, sys=2.02%, ctx=185, majf=0, minf=17 00:34:36.094 IO depths : 1=1.2%, 2=7.5%, 4=24.9%, 8=55.1%, 16=11.2%, 32=0.0%, >=64=0.0% 00:34:36.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.094 complete : 0=0.0%, 4=94.4%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.094 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.094 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:36.094 filename0: (groupid=0, jobs=1): err= 0: pid=1550490: Thu Jul 25 23:40:32 2024 00:34:36.094 read: IOPS=359, BW=1440KiB/s (1474kB/s)(14.1MiB/10001msec) 00:34:36.094 slat (nsec): min=8793, max=81228, avg=36340.72, stdev=13071.83 00:34:36.094 clat (msec): min=27, max=321, avg=44.12, stdev=50.17 00:34:36.094 lat (msec): min=27, max=321, avg=44.15, stdev=50.17 00:34:36.094 clat percentiles (msec): 00:34:36.094 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:36.094 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:36.094 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 61], 00:34:36.094 | 99.00th=[ 300], 99.50th=[ 313], 99.90th=[ 321], 99.95th=[ 321], 00:34:36.094 | 99.99th=[ 321] 00:34:36.094 bw ( KiB/s): min= 128, max= 1920, per=4.05%, avg=1414.89, stdev=743.33, samples=19 00:34:36.094 iops : min= 32, max= 480, avg=353.68, stdev=185.82, samples=19 00:34:36.094 lat (msec) : 50=94.67%, 100=1.33%, 250=0.44%, 500=3.56% 00:34:36.094 cpu : usr=97.70%, sys=1.65%, ctx=86, majf=0, 
minf=16 00:34:36.094 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:36.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.094 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.094 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.094 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:36.094 filename0: (groupid=0, jobs=1): err= 0: pid=1550491: Thu Jul 25 23:40:32 2024 00:34:36.094 read: IOPS=370, BW=1481KiB/s (1516kB/s)(14.5MiB/10026msec) 00:34:36.094 slat (usec): min=3, max=105, avg=39.18, stdev=23.19 00:34:36.094 clat (msec): min=8, max=289, avg=42.87, stdev=39.97 00:34:36.094 lat (msec): min=8, max=289, avg=42.91, stdev=39.97 00:34:36.094 clat percentiles (msec): 00:34:36.094 | 1.00th=[ 21], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:36.094 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:36.094 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 153], 00:34:36.094 | 99.00th=[ 236], 99.50th=[ 245], 99.90th=[ 249], 99.95th=[ 288], 00:34:36.094 | 99.99th=[ 288] 00:34:36.094 bw ( KiB/s): min= 256, max= 2052, per=4.23%, avg=1478.60, stdev=706.47, samples=20 00:34:36.094 iops : min= 64, max= 513, avg=369.65, stdev=176.62, samples=20 00:34:36.094 lat (msec) : 10=0.86%, 20=0.05%, 50=93.05%, 250=5.98%, 500=0.05% 00:34:36.094 cpu : usr=96.67%, sys=1.99%, ctx=80, majf=0, minf=14 00:34:36.094 IO depths : 1=5.8%, 2=12.0%, 4=24.9%, 8=50.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:34:36.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.094 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.094 issued rwts: total=3712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.094 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:36.094 filename0: (groupid=0, jobs=1): err= 0: pid=1550492: Thu Jul 25 23:40:32 2024 00:34:36.094 read: IOPS=360, BW=1444KiB/s (1478kB/s)(14.1MiB/10018msec) 00:34:36.094 slat (usec): min=8, max=120, avg=38.20, stdev=15.37 00:34:36.094 clat (msec): min=24, max=429, avg=44.00, stdev=49.89 00:34:36.094 lat (msec): min=24, max=429, avg=44.04, stdev=49.89 00:34:36.094 clat percentiles (msec): 00:34:36.094 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:36.094 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:36.094 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 46], 00:34:36.094 | 99.00th=[ 313], 99.50th=[ 313], 99.90th=[ 397], 99.95th=[ 430], 00:34:36.094 | 99.99th=[ 430] 00:34:36.094 bw ( KiB/s): min= 128, max= 1920, per=4.12%, avg=1440.00, stdev=727.49, samples=20 00:34:36.094 iops : min= 32, max= 480, avg=360.00, stdev=181.87, samples=20 00:34:36.094 lat (msec) : 50=95.13%, 100=0.44%, 250=1.44%, 500=2.99% 00:34:36.094 cpu : usr=98.25%, sys=1.35%, ctx=17, majf=0, minf=21 00:34:36.094 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:36.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.094 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.094 issued rwts: total=3616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.094 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:36.094 filename0: (groupid=0, jobs=1): err= 0: pid=1550493: Thu Jul 25 23:40:32 2024 00:34:36.094 read: IOPS=365, BW=1462KiB/s (1497kB/s)(14.3MiB/10012msec) 00:34:36.094 slat (usec): min=8, max=114, avg=33.35, stdev=14.42 00:34:36.094 clat (msec): 
min=22, max=327, avg=43.49, stdev=43.21 00:34:36.094 lat (msec): min=22, max=327, avg=43.53, stdev=43.21 00:34:36.094 clat percentiles (msec): 00:34:36.094 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:36.094 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:36.094 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 163], 00:34:36.094 | 99.00th=[ 268], 99.50th=[ 300], 99.90th=[ 300], 99.95th=[ 326], 00:34:36.094 | 99.99th=[ 326] 00:34:36.094 bw ( KiB/s): min= 176, max= 1920, per=4.17%, avg=1457.60, stdev=709.24, samples=20 00:34:36.094 iops : min= 44, max= 480, avg=364.40, stdev=177.31, samples=20 00:34:36.094 lat (msec) : 50=94.43%, 100=0.16%, 250=4.26%, 500=1.15% 00:34:36.094 cpu : usr=98.24%, sys=1.37%, ctx=17, majf=0, minf=18 00:34:36.094 IO depths : 1=5.9%, 2=11.8%, 4=24.1%, 8=51.6%, 16=6.6%, 32=0.0%, >=64=0.0% 00:34:36.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.094 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.094 issued rwts: total=3660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.094 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:36.094 filename0: (groupid=0, jobs=1): err= 0: pid=1550494: Thu Jul 25 23:40:32 2024 00:34:36.094 read: IOPS=360, BW=1441KiB/s (1475kB/s)(14.1MiB/10010msec) 00:34:36.094 slat (usec): min=8, max=109, avg=39.56, stdev=15.75 00:34:36.094 clat (msec): min=9, max=347, avg=44.03, stdev=50.18 00:34:36.094 lat (msec): min=9, max=347, avg=44.07, stdev=50.18 00:34:36.094 clat percentiles (msec): 00:34:36.094 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:36.094 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:36.094 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 60], 00:34:36.094 | 99.00th=[ 300], 99.50th=[ 321], 99.90th=[ 330], 99.95th=[ 347], 00:34:36.094 | 99.99th=[ 347] 00:34:36.094 bw ( KiB/s): min= 128, max= 1920, per=4.05%, avg=1414.89, stdev=743.33, samples=19 00:34:36.094 iops : min= 32, max= 480, avg=353.68, stdev=185.82, samples=19 00:34:36.094 lat (msec) : 10=0.14%, 50=94.54%, 100=1.33%, 250=0.50%, 500=3.50% 00:34:36.094 cpu : usr=98.30%, sys=1.31%, ctx=18, majf=0, minf=33 00:34:36.094 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.0%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:36.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.094 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.094 issued rwts: total=3605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.094 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:36.094 filename0: (groupid=0, jobs=1): err= 0: pid=1550495: Thu Jul 25 23:40:32 2024 00:34:36.094 read: IOPS=360, BW=1444KiB/s (1478kB/s)(14.1MiB/10018msec) 00:34:36.094 slat (usec): min=4, max=129, avg=28.31, stdev=13.71 00:34:36.094 clat (msec): min=19, max=393, avg=44.09, stdev=50.32 00:34:36.094 lat (msec): min=19, max=393, avg=44.12, stdev=50.33 00:34:36.094 clat percentiles (msec): 00:34:36.094 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:34:36.094 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:36.094 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 44], 00:34:36.094 | 99.00th=[ 313], 99.50th=[ 321], 99.90th=[ 321], 99.95th=[ 393], 00:34:36.094 | 99.99th=[ 393] 00:34:36.094 bw ( KiB/s): min= 144, max= 1936, per=4.05%, avg=1414.74, stdev=738.28, samples=19 00:34:36.094 iops : min= 36, max= 484, avg=353.68, stdev=184.57, samples=19 00:34:36.094 lat 
(msec) : 20=0.19%, 50=94.94%, 100=0.88%, 250=0.55%, 500=3.43% 00:34:36.094 cpu : usr=95.39%, sys=2.85%, ctx=129, majf=0, minf=21 00:34:36.094 IO depths : 1=1.3%, 2=7.5%, 4=24.9%, 8=55.1%, 16=11.2%, 32=0.0%, >=64=0.0% 00:34:36.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.094 complete : 0=0.0%, 4=94.4%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.094 issued rwts: total=3616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.094 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:36.094 filename0: (groupid=0, jobs=1): err= 0: pid=1550496: Thu Jul 25 23:40:32 2024 00:34:36.094 read: IOPS=359, BW=1438KiB/s (1473kB/s)(14.1MiB/10013msec) 00:34:36.094 slat (usec): min=6, max=100, avg=34.83, stdev=15.63 00:34:36.094 clat (msec): min=13, max=377, avg=44.18, stdev=51.51 00:34:36.094 lat (msec): min=13, max=377, avg=44.22, stdev=51.51 00:34:36.094 clat percentiles (msec): 00:34:36.094 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:36.094 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:36.094 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 39], 00:34:36.094 | 99.00th=[ 317], 99.50th=[ 317], 99.90th=[ 380], 99.95th=[ 380], 00:34:36.094 | 99.99th=[ 380] 00:34:36.094 bw ( KiB/s): min= 128, max= 1920, per=4.10%, avg=1433.60, stdev=729.53, samples=20 00:34:36.094 iops : min= 32, max= 480, avg=358.40, stdev=182.38, samples=20 00:34:36.094 lat (msec) : 20=0.11%, 50=94.94%, 100=0.50%, 250=0.94%, 500=3.50% 00:34:36.094 cpu : usr=96.74%, sys=2.06%, ctx=155, majf=0, minf=29 00:34:36.094 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:36.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.094 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.095 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.095 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:36.095 filename1: (groupid=0, jobs=1): err= 0: pid=1550497: Thu Jul 25 23:40:32 2024 00:34:36.095 read: IOPS=360, BW=1440KiB/s (1475kB/s)(14.1MiB/10009msec) 00:34:36.095 slat (usec): min=8, max=111, avg=59.86, stdev=23.75 00:34:36.095 clat (msec): min=8, max=454, avg=43.98, stdev=52.35 00:34:36.095 lat (msec): min=8, max=454, avg=44.04, stdev=52.35 00:34:36.095 clat percentiles (msec): 00:34:36.095 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:36.095 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:36.095 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 44], 00:34:36.095 | 99.00th=[ 317], 99.50th=[ 317], 99.90th=[ 380], 99.95th=[ 456], 00:34:36.095 | 99.99th=[ 456] 00:34:36.095 bw ( KiB/s): min= 128, max= 1968, per=4.04%, avg=1409.84, stdev=756.07, samples=19 00:34:36.095 iops : min= 32, max= 492, avg=352.42, stdev=189.00, samples=19 00:34:36.095 lat (msec) : 10=0.39%, 20=0.06%, 50=94.67%, 100=0.44%, 250=0.94% 00:34:36.095 lat (msec) : 500=3.50% 00:34:36.095 cpu : usr=97.85%, sys=1.56%, ctx=29, majf=0, minf=26 00:34:36.095 IO depths : 1=1.3%, 2=7.4%, 4=24.7%, 8=55.4%, 16=11.2%, 32=0.0%, >=64=0.0% 00:34:36.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.095 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.095 issued rwts: total=3604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.095 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:36.095 filename1: (groupid=0, jobs=1): err= 0: pid=1550498: Thu Jul 25 23:40:32 
2024 00:34:36.095 read: IOPS=364, BW=1459KiB/s (1494kB/s)(14.3MiB/10021msec) 00:34:36.095 slat (usec): min=8, max=121, avg=33.50, stdev=18.16 00:34:36.095 clat (msec): min=19, max=375, avg=43.59, stdev=46.56 00:34:36.095 lat (msec): min=19, max=375, avg=43.63, stdev=46.56 00:34:36.095 clat percentiles (msec): 00:34:36.095 | 1.00th=[ 27], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:36.095 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:36.095 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 88], 00:34:36.095 | 99.00th=[ 300], 99.50th=[ 321], 99.90th=[ 321], 99.95th=[ 376], 00:34:36.095 | 99.99th=[ 376] 00:34:36.095 bw ( KiB/s): min= 144, max= 1920, per=4.17%, avg=1455.20, stdev=720.62, samples=20 00:34:36.095 iops : min= 36, max= 480, avg=363.80, stdev=180.15, samples=20 00:34:36.095 lat (msec) : 20=0.27%, 50=94.47%, 100=0.27%, 250=3.07%, 500=1.92% 00:34:36.095 cpu : usr=98.50%, sys=1.10%, ctx=15, majf=0, minf=24 00:34:36.095 IO depths : 1=6.0%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:36.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.095 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.095 issued rwts: total=3654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.095 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:36.095 filename1: (groupid=0, jobs=1): err= 0: pid=1550499: Thu Jul 25 23:40:32 2024 00:34:36.095 read: IOPS=359, BW=1439KiB/s (1473kB/s)(14.1MiB/10008msec) 00:34:36.095 slat (usec): min=8, max=122, avg=32.67, stdev=12.85 00:34:36.095 clat (msec): min=12, max=378, avg=44.17, stdev=52.22 00:34:36.095 lat (msec): min=12, max=378, avg=44.20, stdev=52.22 00:34:36.095 clat percentiles (msec): 00:34:36.095 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:36.095 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:36.095 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 39], 00:34:36.095 | 99.00th=[ 317], 99.50th=[ 317], 99.90th=[ 380], 99.95th=[ 380], 00:34:36.095 | 99.99th=[ 380] 00:34:36.095 bw ( KiB/s): min= 128, max= 1920, per=4.03%, avg=1408.16, stdev=754.91, samples=19 00:34:36.095 iops : min= 32, max= 480, avg=352.00, stdev=188.71, samples=19 00:34:36.095 lat (msec) : 20=0.44%, 50=94.67%, 100=0.44%, 250=0.89%, 500=3.56% 00:34:36.095 cpu : usr=98.02%, sys=1.57%, ctx=21, majf=0, minf=22 00:34:36.095 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:36.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.095 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.095 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.095 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:36.095 filename1: (groupid=0, jobs=1): err= 0: pid=1550500: Thu Jul 25 23:40:32 2024 00:34:36.095 read: IOPS=401, BW=1605KiB/s (1644kB/s)(15.7MiB/10007msec) 00:34:36.095 slat (usec): min=4, max=122, avg=26.82, stdev=17.19 00:34:36.095 clat (msec): min=5, max=311, avg=39.65, stdev=41.84 00:34:36.095 lat (msec): min=5, max=311, avg=39.68, stdev=41.84 00:34:36.095 clat percentiles (msec): 00:34:36.095 | 1.00th=[ 13], 5.00th=[ 21], 10.00th=[ 23], 20.00th=[ 26], 00:34:36.095 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:34:36.095 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 109], 00:34:36.095 | 99.00th=[ 249], 99.50th=[ 268], 99.90th=[ 300], 99.95th=[ 300], 00:34:36.095 | 99.99th=[ 313] 00:34:36.095 
bw ( KiB/s): min= 240, max= 2816, per=4.58%, avg=1600.00, stdev=841.52, samples=20 00:34:36.095 iops : min= 60, max= 704, avg=400.00, stdev=210.38, samples=20 00:34:36.095 lat (msec) : 10=0.80%, 20=1.05%, 50=92.98%, 100=0.05%, 250=4.28% 00:34:36.095 lat (msec) : 500=0.85% 00:34:36.095 cpu : usr=97.94%, sys=1.66%, ctx=17, majf=0, minf=22 00:34:36.095 IO depths : 1=4.3%, 2=8.9%, 4=20.0%, 8=58.5%, 16=8.2%, 32=0.0%, >=64=0.0% 00:34:36.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.095 complete : 0=0.0%, 4=92.7%, 8=1.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.095 issued rwts: total=4016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.095 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:36.095 filename1: (groupid=0, jobs=1): err= 0: pid=1550501: Thu Jul 25 23:40:32 2024 00:34:36.095 read: IOPS=366, BW=1465KiB/s (1500kB/s)(14.3MiB/10029msec) 00:34:36.095 slat (usec): min=4, max=113, avg=30.29, stdev=14.15 00:34:36.095 clat (msec): min=20, max=310, avg=43.44, stdev=42.83 00:34:36.095 lat (msec): min=20, max=310, avg=43.48, stdev=42.83 00:34:36.095 clat percentiles (msec): 00:34:36.095 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:36.095 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:36.095 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 128], 00:34:36.095 | 99.00th=[ 255], 99.50th=[ 279], 99.90th=[ 313], 99.95th=[ 313], 00:34:36.095 | 99.99th=[ 313] 00:34:36.095 bw ( KiB/s): min= 256, max= 1920, per=4.18%, avg=1459.40, stdev=706.02, samples=20 00:34:36.095 iops : min= 64, max= 480, avg=364.85, stdev=176.51, samples=20 00:34:36.095 lat (msec) : 50=94.28%, 250=4.41%, 500=1.31% 00:34:36.095 cpu : usr=98.06%, sys=1.53%, ctx=15, majf=0, minf=22 00:34:36.095 IO depths : 1=5.9%, 2=11.9%, 4=24.2%, 8=51.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:34:36.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.095 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.095 issued rwts: total=3672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.095 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:36.095 filename1: (groupid=0, jobs=1): err= 0: pid=1550502: Thu Jul 25 23:40:32 2024 00:34:36.095 read: IOPS=361, BW=1444KiB/s (1479kB/s)(14.1MiB/10009msec) 00:34:36.095 slat (usec): min=8, max=119, avg=39.14, stdev=16.51 00:34:36.095 clat (msec): min=8, max=453, avg=43.95, stdev=51.70 00:34:36.095 lat (msec): min=8, max=453, avg=43.99, stdev=51.70 00:34:36.095 clat percentiles (msec): 00:34:36.095 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:36.095 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:36.095 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 59], 00:34:36.095 | 99.00th=[ 317], 99.50th=[ 376], 99.90th=[ 414], 99.95th=[ 456], 00:34:36.095 | 99.99th=[ 456] 00:34:36.095 bw ( KiB/s): min= 144, max= 1920, per=4.05%, avg=1414.89, stdev=752.80, samples=19 00:34:36.095 iops : min= 36, max= 480, avg=353.68, stdev=188.19, samples=19 00:34:36.095 lat (msec) : 10=0.39%, 50=94.58%, 100=0.61%, 250=1.22%, 500=3.21% 00:34:36.095 cpu : usr=98.33%, sys=1.27%, ctx=14, majf=0, minf=23 00:34:36.095 IO depths : 1=5.9%, 2=12.1%, 4=24.8%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:36.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.095 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.095 issued rwts: total=3614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:34:36.095 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:36.095 filename1: (groupid=0, jobs=1): err= 0: pid=1550503: Thu Jul 25 23:40:32 2024 00:34:36.095 read: IOPS=359, BW=1439KiB/s (1474kB/s)(14.1MiB/10004msec) 00:34:36.095 slat (usec): min=8, max=131, avg=48.50, stdev=21.09 00:34:36.095 clat (msec): min=25, max=397, avg=44.02, stdev=50.52 00:34:36.095 lat (msec): min=25, max=397, avg=44.07, stdev=50.52 00:34:36.095 clat percentiles (msec): 00:34:36.095 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:36.095 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:36.095 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 64], 00:34:36.095 | 99.00th=[ 305], 99.50th=[ 321], 99.90th=[ 351], 99.95th=[ 397], 00:34:36.095 | 99.99th=[ 397] 00:34:36.095 bw ( KiB/s): min= 128, max= 1920, per=4.03%, avg=1408.00, stdev=740.87, samples=19 00:34:36.095 iops : min= 32, max= 480, avg=352.00, stdev=185.22, samples=19 00:34:36.095 lat (msec) : 50=94.72%, 100=1.28%, 250=0.61%, 500=3.39% 00:34:36.095 cpu : usr=98.25%, sys=1.33%, ctx=15, majf=0, minf=28 00:34:36.095 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:36.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.095 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.095 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.095 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:36.095 filename1: (groupid=0, jobs=1): err= 0: pid=1550504: Thu Jul 25 23:40:32 2024 00:34:36.095 read: IOPS=360, BW=1444KiB/s (1478kB/s)(14.1MiB/10018msec) 00:34:36.095 slat (nsec): min=8159, max=93276, avg=40100.66, stdev=14327.92 00:34:36.095 clat (msec): min=25, max=397, avg=43.97, stdev=49.86 00:34:36.095 lat (msec): min=25, max=397, avg=44.01, stdev=49.85 00:34:36.095 clat percentiles (msec): 00:34:36.095 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:36.095 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:36.095 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 46], 00:34:36.095 | 99.00th=[ 300], 99.50th=[ 321], 99.90th=[ 351], 99.95th=[ 397], 00:34:36.095 | 99.99th=[ 397] 00:34:36.095 bw ( KiB/s): min= 128, max= 1920, per=4.12%, avg=1440.00, stdev=729.14, samples=20 00:34:36.095 iops : min= 32, max= 480, avg=360.00, stdev=182.28, samples=20 00:34:36.095 lat (msec) : 50=95.19%, 100=0.39%, 250=1.44%, 500=2.99% 00:34:36.095 cpu : usr=96.70%, sys=2.01%, ctx=105, majf=0, minf=20 00:34:36.095 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:36.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.095 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.095 issued rwts: total=3616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.095 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:36.095 filename2: (groupid=0, jobs=1): err= 0: pid=1550505: Thu Jul 25 23:40:32 2024 00:34:36.095 read: IOPS=366, BW=1467KiB/s (1502kB/s)(14.4MiB/10022msec) 00:34:36.095 slat (nsec): min=8073, max=99298, avg=30425.21, stdev=12891.11 00:34:36.095 clat (msec): min=19, max=316, avg=43.35, stdev=40.72 00:34:36.095 lat (msec): min=19, max=316, avg=43.38, stdev=40.72 00:34:36.095 clat percentiles (msec): 00:34:36.095 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:36.095 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:36.095 | 70.00th=[ 34], 
80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 161], 00:34:36.095 | 99.00th=[ 245], 99.50th=[ 249], 99.90th=[ 317], 99.95th=[ 317], 00:34:36.095 | 99.99th=[ 317] 00:34:36.095 bw ( KiB/s): min= 224, max= 1920, per=4.19%, avg=1464.00, stdev=697.80, samples=20 00:34:36.095 iops : min= 56, max= 480, avg=366.00, stdev=174.45, samples=20 00:34:36.095 lat (msec) : 20=0.27%, 50=93.74%, 250=5.50%, 500=0.49% 00:34:36.095 cpu : usr=98.30%, sys=1.29%, ctx=15, majf=0, minf=25 00:34:36.095 IO depths : 1=5.9%, 2=11.8%, 4=24.0%, 8=51.7%, 16=6.6%, 32=0.0%, >=64=0.0% 00:34:36.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.095 complete : 0=0.0%, 4=93.8%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.095 issued rwts: total=3676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.095 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:36.095 filename2: (groupid=0, jobs=1): err= 0: pid=1550506: Thu Jul 25 23:40:32 2024 00:34:36.095 read: IOPS=365, BW=1463KiB/s (1498kB/s)(14.3MiB/10020msec) 00:34:36.095 slat (usec): min=4, max=136, avg=53.87, stdev=31.09 00:34:36.095 clat (msec): min=13, max=348, avg=43.31, stdev=44.11 00:34:36.095 lat (msec): min=13, max=348, avg=43.36, stdev=44.11 00:34:36.095 clat percentiles (msec): 00:34:36.095 | 1.00th=[ 26], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:36.095 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:36.095 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 110], 00:34:36.095 | 99.00th=[ 271], 99.50th=[ 288], 99.90th=[ 313], 99.95th=[ 351], 00:34:36.095 | 99.99th=[ 351] 00:34:36.095 bw ( KiB/s): min= 128, max= 2032, per=4.18%, avg=1459.35, stdev=712.86, samples=20 00:34:36.095 iops : min= 32, max= 508, avg=364.80, stdev=178.19, samples=20 00:34:36.095 lat (msec) : 20=0.44%, 50=93.94%, 100=0.38%, 250=3.93%, 500=1.31% 00:34:36.095 cpu : usr=98.05%, sys=1.40%, ctx=46, majf=0, minf=18 00:34:36.095 IO depths : 1=1.6%, 2=7.8%, 4=25.0%, 8=54.7%, 16=10.9%, 32=0.0%, >=64=0.0% 00:34:36.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.095 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.095 issued rwts: total=3664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.095 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:36.095 filename2: (groupid=0, jobs=1): err= 0: pid=1550507: Thu Jul 25 23:40:32 2024 00:34:36.095 read: IOPS=359, BW=1440KiB/s (1474kB/s)(14.1MiB/10002msec) 00:34:36.095 slat (usec): min=8, max=113, avg=39.37, stdev=14.37 00:34:36.095 clat (msec): min=26, max=321, avg=44.10, stdev=50.19 00:34:36.095 lat (msec): min=26, max=321, avg=44.14, stdev=50.18 00:34:36.095 clat percentiles (msec): 00:34:36.095 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:36.095 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:36.095 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 61], 00:34:36.095 | 99.00th=[ 300], 99.50th=[ 313], 99.90th=[ 321], 99.95th=[ 321], 00:34:36.095 | 99.99th=[ 321] 00:34:36.095 bw ( KiB/s): min= 128, max= 1920, per=4.05%, avg=1414.74, stdev=743.27, samples=19 00:34:36.095 iops : min= 32, max= 480, avg=353.68, stdev=185.82, samples=19 00:34:36.095 lat (msec) : 50=94.67%, 100=1.33%, 250=0.44%, 500=3.56% 00:34:36.095 cpu : usr=95.30%, sys=2.68%, ctx=127, majf=0, minf=21 00:34:36.095 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:36.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.095 complete : 
0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.095 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.095 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:36.095 filename2: (groupid=0, jobs=1): err= 0: pid=1550508: Thu Jul 25 23:40:32 2024 00:34:36.095 read: IOPS=358, BW=1433KiB/s (1468kB/s)(14.0MiB/10003msec) 00:34:36.095 slat (nsec): min=4297, max=70645, avg=30847.68, stdev=9502.19 00:34:36.095 clat (msec): min=27, max=428, avg=44.37, stdev=52.60 00:34:36.095 lat (msec): min=27, max=429, avg=44.40, stdev=52.60 00:34:36.095 clat percentiles (msec): 00:34:36.095 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:36.095 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:36.095 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 42], 00:34:36.095 | 99.00th=[ 317], 99.50th=[ 380], 99.90th=[ 388], 99.95th=[ 430], 00:34:36.095 | 99.99th=[ 430] 00:34:36.095 bw ( KiB/s): min= 128, max= 1920, per=4.03%, avg=1408.00, stdev=754.85, samples=19 00:34:36.095 iops : min= 32, max= 480, avg=352.00, stdev=188.71, samples=19 00:34:36.095 lat (msec) : 50=95.09%, 100=0.45%, 250=1.00%, 500=3.46% 00:34:36.095 cpu : usr=96.58%, sys=2.13%, ctx=73, majf=0, minf=21 00:34:36.095 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:36.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.095 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.095 issued rwts: total=3584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.095 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:36.095 filename2: (groupid=0, jobs=1): err= 0: pid=1550509: Thu Jul 25 23:40:32 2024 00:34:36.096 read: IOPS=362, BW=1450KiB/s (1485kB/s)(14.2MiB/10008msec) 00:34:36.096 slat (usec): min=8, max=105, avg=35.10, stdev=15.59 00:34:36.096 clat (msec): min=12, max=452, avg=43.82, stdev=51.44 00:34:36.096 lat (msec): min=12, max=452, avg=43.86, stdev=51.44 00:34:36.096 clat percentiles (msec): 00:34:36.096 | 1.00th=[ 22], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:36.096 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:36.096 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 54], 00:34:36.096 | 99.00th=[ 313], 99.50th=[ 321], 99.90th=[ 439], 99.95th=[ 451], 00:34:36.096 | 99.99th=[ 451] 00:34:36.096 bw ( KiB/s): min= 128, max= 2016, per=4.06%, avg=1419.79, stdev=762.89, samples=19 00:34:36.096 iops : min= 32, max= 504, avg=354.95, stdev=190.72, samples=19 00:34:36.096 lat (msec) : 20=0.44%, 50=94.43%, 100=0.72%, 250=1.05%, 500=3.36% 00:34:36.096 cpu : usr=95.62%, sys=2.73%, ctx=257, majf=0, minf=18 00:34:36.096 IO depths : 1=5.8%, 2=11.9%, 4=24.3%, 8=51.3%, 16=6.7%, 32=0.0%, >=64=0.0% 00:34:36.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.096 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.096 issued rwts: total=3628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.096 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:36.096 filename2: (groupid=0, jobs=1): err= 0: pid=1550510: Thu Jul 25 23:40:32 2024 00:34:36.096 read: IOPS=372, BW=1490KiB/s (1525kB/s)(14.6MiB/10011msec) 00:34:36.096 slat (nsec): min=4258, max=63573, avg=21846.22, stdev=9625.43 00:34:36.096 clat (msec): min=4, max=240, avg=42.78, stdev=38.96 00:34:36.096 lat (msec): min=4, max=240, avg=42.80, stdev=38.96 00:34:36.096 clat percentiles (msec): 00:34:36.096 | 1.00th=[ 10], 5.00th=[ 
33], 10.00th=[ 33], 20.00th=[ 33], 00:34:36.096 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:36.096 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 163], 00:34:36.096 | 99.00th=[ 236], 99.50th=[ 239], 99.90th=[ 241], 99.95th=[ 241], 00:34:36.096 | 99.99th=[ 241] 00:34:36.096 bw ( KiB/s): min= 256, max= 2176, per=4.25%, avg=1484.80, stdev=697.88, samples=20 00:34:36.096 iops : min= 64, max= 544, avg=371.20, stdev=174.47, samples=20 00:34:36.096 lat (msec) : 10=1.29%, 20=0.19%, 50=92.09%, 100=0.86%, 250=5.58% 00:34:36.096 cpu : usr=96.61%, sys=2.44%, ctx=224, majf=0, minf=45 00:34:36.096 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:36.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.096 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.096 issued rwts: total=3728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.096 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:36.096 filename2: (groupid=0, jobs=1): err= 0: pid=1550511: Thu Jul 25 23:40:32 2024 00:34:36.096 read: IOPS=366, BW=1465KiB/s (1501kB/s)(14.3MiB/10018msec) 00:34:36.096 slat (nsec): min=7973, max=85054, avg=21541.48, stdev=13554.29 00:34:36.096 clat (msec): min=22, max=408, avg=43.50, stdev=43.71 00:34:36.096 lat (msec): min=22, max=408, avg=43.52, stdev=43.71 00:34:36.096 clat percentiles (msec): 00:34:36.096 | 1.00th=[ 27], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:34:36.096 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:36.096 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 110], 00:34:36.096 | 99.00th=[ 266], 99.50th=[ 313], 99.90th=[ 321], 99.95th=[ 409], 00:34:36.096 | 99.99th=[ 409] 00:34:36.096 bw ( KiB/s): min= 128, max= 1920, per=4.18%, avg=1461.60, stdev=709.89, samples=20 00:34:36.096 iops : min= 32, max= 480, avg=365.40, stdev=177.47, samples=20 00:34:36.096 lat (msec) : 50=94.28%, 100=0.33%, 250=4.31%, 500=1.09% 00:34:36.096 cpu : usr=97.64%, sys=1.94%, ctx=21, majf=0, minf=34 00:34:36.096 IO depths : 1=5.9%, 2=12.0%, 4=24.5%, 8=51.0%, 16=6.6%, 32=0.0%, >=64=0.0% 00:34:36.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.096 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.096 issued rwts: total=3670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.096 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:36.096 filename2: (groupid=0, jobs=1): err= 0: pid=1550512: Thu Jul 25 23:40:32 2024 00:34:36.096 read: IOPS=360, BW=1444KiB/s (1478kB/s)(14.1MiB/10018msec) 00:34:36.096 slat (usec): min=8, max=109, avg=36.89, stdev=14.90 00:34:36.096 clat (msec): min=21, max=393, avg=44.00, stdev=49.60 00:34:36.096 lat (msec): min=21, max=393, avg=44.04, stdev=49.59 00:34:36.096 clat percentiles (msec): 00:34:36.096 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:34:36.096 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:34:36.096 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 35], 95.00th=[ 46], 00:34:36.096 | 99.00th=[ 300], 99.50th=[ 313], 99.90th=[ 321], 99.95th=[ 393], 00:34:36.096 | 99.99th=[ 393] 00:34:36.096 bw ( KiB/s): min= 128, max= 1920, per=4.12%, avg=1440.00, stdev=726.18, samples=20 00:34:36.096 iops : min= 32, max= 480, avg=360.00, stdev=181.54, samples=20 00:34:36.096 lat (msec) : 50=95.13%, 100=0.44%, 250=1.44%, 500=2.99% 00:34:36.096 cpu : usr=95.70%, sys=2.58%, ctx=417, majf=0, minf=27 00:34:36.096 IO depths : 1=5.6%, 2=11.9%, 
4=25.0%, 8=50.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:34:36.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.096 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.096 issued rwts: total=3616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.096 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:36.096 00:34:36.096 Run status group 0 (all jobs): 00:34:36.096 READ: bw=34.1MiB/s (35.8MB/s), 1433KiB/s-1605KiB/s (1468kB/s-1644kB/s), io=342MiB (359MB), run=10001-10029msec 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:36.096 bdev_null0 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:36.096 [2024-07-25 23:40:32.558695] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.096 
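For reference, the subsystem bring-up traced above reduces to four RPCs. A minimal sketch issuing the same sequence with SPDK's scripts/rpc.py directly, assuming a running nvmf_tgt that already has the TCP transport created (rpc.py nvmf_create_transport -t tcp); the commands mirror the rpc_cmd calls in the trace:

    # Null bdev with 16-byte metadata and DIF type 1, as in the trace above
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # Expose it over NVMe/TCP on 10.0.0.2:4420
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420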
23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:36.096 bdev_null1 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:36.096 23:40:32 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:36.096 { 00:34:36.096 "params": { 00:34:36.096 "name": "Nvme$subsystem", 00:34:36.096 "trtype": "$TEST_TRANSPORT", 00:34:36.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:36.096 "adrfam": "ipv4", 00:34:36.096 "trsvcid": "$NVMF_PORT", 00:34:36.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:36.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:36.096 "hdgst": ${hdgst:-false}, 00:34:36.096 "ddgst": ${ddgst:-false} 00:34:36.096 }, 00:34:36.096 "method": "bdev_nvme_attach_controller" 00:34:36.096 } 00:34:36.096 EOF 00:34:36.096 )") 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:36.096 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:36.097 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:36.097 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:36.097 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:36.097 23:40:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:36.097 23:40:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:36.097 { 00:34:36.097 "params": { 00:34:36.097 "name": "Nvme$subsystem", 00:34:36.097 "trtype": "$TEST_TRANSPORT", 00:34:36.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:36.097 "adrfam": "ipv4", 00:34:36.097 "trsvcid": "$NVMF_PORT", 00:34:36.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:36.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:36.097 "hdgst": ${hdgst:-false}, 00:34:36.097 "ddgst": ${ddgst:-false} 00:34:36.097 }, 00:34:36.097 "method": "bdev_nvme_attach_controller" 00:34:36.097 } 00:34:36.097 EOF 00:34:36.097 )") 00:34:36.097 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:36.097 23:40:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:36.097 23:40:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:36.097 23:40:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:34:36.097 23:40:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:36.097 23:40:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:36.097 "params": { 00:34:36.097 "name": "Nvme0", 00:34:36.097 "trtype": "tcp", 00:34:36.097 "traddr": "10.0.0.2", 00:34:36.097 "adrfam": "ipv4", 00:34:36.097 "trsvcid": "4420", 00:34:36.097 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:36.097 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:36.097 "hdgst": false, 00:34:36.097 "ddgst": false 00:34:36.097 }, 00:34:36.097 "method": "bdev_nvme_attach_controller" 00:34:36.097 },{ 00:34:36.097 "params": { 00:34:36.097 "name": "Nvme1", 00:34:36.097 "trtype": "tcp", 00:34:36.097 "traddr": "10.0.0.2", 00:34:36.097 "adrfam": "ipv4", 00:34:36.097 "trsvcid": "4420", 00:34:36.097 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:36.097 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:36.097 "hdgst": false, 00:34:36.097 "ddgst": false 00:34:36.097 }, 00:34:36.097 "method": "bdev_nvme_attach_controller" 00:34:36.097 }' 00:34:36.097 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:36.097 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:36.097 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:36.097 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:36.097 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:36.097 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:36.097 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:36.097 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:36.097 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:36.097 23:40:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:36.097 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:36.097 ... 00:34:36.097 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:36.097 ... 
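The JSON printed above is what gen_nvmf_target_json streams to the fio bdev plugin over /dev/fd/62. A sketch of the equivalent standalone invocation with the config saved to a file instead; the envelope shape wrapped around the printed attach entries is an assumption here, and the paths should be adjusted to the local SPDK build and fio install:

    # /tmp/nvmf_bdev.json -- assumed envelope around the entries printed above:
    # { "subsystems": [ { "subsystem": "bdev", "config": [
    #     { "params": { ... }, "method": "bdev_nvme_attach_controller" }, ... ] } ] }
    LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf /tmp/nvmf_bdev.json /tmp/job.fio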
00:34:36.097 fio-3.35 00:34:36.097 Starting 4 threads 00:34:36.097 EAL: No free 2048 kB hugepages reported on node 1 00:34:41.360 00:34:41.360 filename0: (groupid=0, jobs=1): err= 0: pid=1551769: Thu Jul 25 23:40:38 2024 00:34:41.360 read: IOPS=1841, BW=14.4MiB/s (15.1MB/s)(72.0MiB/5002msec) 00:34:41.360 slat (nsec): min=4159, max=66774, avg=14203.05, stdev=7852.17 00:34:41.360 clat (usec): min=870, max=7996, avg=4296.22, stdev=652.82 00:34:41.360 lat (usec): min=883, max=8015, avg=4310.42, stdev=652.33 00:34:41.360 clat percentiles (usec): 00:34:41.360 | 1.00th=[ 2900], 5.00th=[ 3458], 10.00th=[ 3720], 20.00th=[ 3949], 00:34:41.361 | 30.00th=[ 4047], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:34:41.361 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 4948], 95.00th=[ 5669], 00:34:41.361 | 99.00th=[ 6652], 99.50th=[ 6980], 99.90th=[ 7439], 99.95th=[ 7504], 00:34:41.361 | 99.99th=[ 8029] 00:34:41.361 bw ( KiB/s): min=14236, max=15264, per=24.98%, avg=14727.60, stdev=294.82, samples=10 00:34:41.361 iops : min= 1779, max= 1908, avg=1840.90, stdev=36.95, samples=10 00:34:41.361 lat (usec) : 1000=0.03% 00:34:41.361 lat (msec) : 2=0.30%, 4=24.51%, 10=75.15% 00:34:41.361 cpu : usr=94.20%, sys=5.18%, ctx=14, majf=0, minf=79 00:34:41.361 IO depths : 1=0.1%, 2=7.9%, 4=64.6%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:41.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.361 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.361 issued rwts: total=9211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.361 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:41.361 filename0: (groupid=0, jobs=1): err= 0: pid=1551770: Thu Jul 25 23:40:38 2024 00:34:41.361 read: IOPS=1921, BW=15.0MiB/s (15.7MB/s)(75.1MiB/5004msec) 00:34:41.361 slat (nsec): min=4084, max=64483, avg=12412.99, stdev=6347.56 00:34:41.361 clat (usec): min=1082, max=7870, avg=4122.14, stdev=671.82 00:34:41.361 lat (usec): min=1095, max=7883, avg=4134.55, stdev=671.87 00:34:41.361 clat percentiles (usec): 00:34:41.361 | 1.00th=[ 2737], 5.00th=[ 3130], 10.00th=[ 3359], 20.00th=[ 3621], 00:34:41.361 | 30.00th=[ 3818], 40.00th=[ 4015], 50.00th=[ 4146], 60.00th=[ 4228], 00:34:41.361 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4752], 95.00th=[ 5473], 00:34:41.361 | 99.00th=[ 6390], 99.50th=[ 6652], 99.90th=[ 7504], 99.95th=[ 7767], 00:34:41.361 | 99.99th=[ 7898] 00:34:41.361 bw ( KiB/s): min=14384, max=16256, per=26.07%, avg=15372.80, stdev=614.74, samples=10 00:34:41.361 iops : min= 1798, max= 2032, avg=1921.60, stdev=76.84, samples=10 00:34:41.361 lat (msec) : 2=0.14%, 4=39.05%, 10=60.82% 00:34:41.361 cpu : usr=93.84%, sys=5.56%, ctx=17, majf=0, minf=73 00:34:41.361 IO depths : 1=0.1%, 2=7.4%, 4=64.8%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:41.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.361 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.361 issued rwts: total=9616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.361 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:41.361 filename1: (groupid=0, jobs=1): err= 0: pid=1551771: Thu Jul 25 23:40:38 2024 00:34:41.361 read: IOPS=1839, BW=14.4MiB/s (15.1MB/s)(71.9MiB/5003msec) 00:34:41.361 slat (nsec): min=4007, max=65671, avg=13328.14, stdev=7306.39 00:34:41.361 clat (usec): min=932, max=8121, avg=4304.64, stdev=718.30 00:34:41.361 lat (usec): min=940, max=8134, avg=4317.97, stdev=717.93 00:34:41.361 clat percentiles (usec): 
00:34:41.361 | 1.00th=[ 2933], 5.00th=[ 3425], 10.00th=[ 3654], 20.00th=[ 3851], 00:34:41.361 | 30.00th=[ 4015], 40.00th=[ 4113], 50.00th=[ 4228], 60.00th=[ 4293], 00:34:41.361 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 5145], 95.00th=[ 5997], 00:34:41.361 | 99.00th=[ 6783], 99.50th=[ 7177], 99.90th=[ 7701], 99.95th=[ 7898], 00:34:41.361 | 99.99th=[ 8094] 00:34:41.361 bw ( KiB/s): min=13904, max=15248, per=24.95%, avg=14714.80, stdev=444.91, samples=10 00:34:41.361 iops : min= 1738, max= 1906, avg=1839.30, stdev=55.64, samples=10 00:34:41.361 lat (usec) : 1000=0.03% 00:34:41.361 lat (msec) : 2=0.28%, 4=27.46%, 10=72.23% 00:34:41.361 cpu : usr=94.46%, sys=4.90%, ctx=12, majf=0, minf=117 00:34:41.361 IO depths : 1=0.1%, 2=7.4%, 4=65.3%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:41.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.361 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.361 issued rwts: total=9203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.361 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:41.361 filename1: (groupid=0, jobs=1): err= 0: pid=1551772: Thu Jul 25 23:40:38 2024 00:34:41.361 read: IOPS=1810, BW=14.1MiB/s (14.8MB/s)(71.3MiB/5042msec) 00:34:41.361 slat (nsec): min=5339, max=69061, avg=15651.53, stdev=8210.24 00:34:41.361 clat (usec): min=790, max=42159, avg=4334.23, stdev=768.96 00:34:41.361 lat (usec): min=802, max=42179, avg=4349.88, stdev=768.38 00:34:41.361 clat percentiles (usec): 00:34:41.361 | 1.00th=[ 2868], 5.00th=[ 3556], 10.00th=[ 3785], 20.00th=[ 3949], 00:34:41.361 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:34:41.361 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 4948], 95.00th=[ 5800], 00:34:41.361 | 99.00th=[ 6718], 99.50th=[ 6980], 99.90th=[ 7504], 99.95th=[ 7635], 00:34:41.361 | 99.99th=[42206] 00:34:41.361 bw ( KiB/s): min=13840, max=15120, per=24.78%, avg=14608.00, stdev=428.66, samples=10 00:34:41.361 iops : min= 1730, max= 1890, avg=1826.00, stdev=53.58, samples=10 00:34:41.361 lat (usec) : 1000=0.03% 00:34:41.361 lat (msec) : 2=0.16%, 4=23.39%, 10=76.40%, 50=0.01% 00:34:41.361 cpu : usr=93.39%, sys=5.34%, ctx=132, majf=0, minf=60 00:34:41.361 IO depths : 1=0.1%, 2=6.7%, 4=66.0%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:41.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.361 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.361 issued rwts: total=9131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.361 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:41.361 00:34:41.361 Run status group 0 (all jobs): 00:34:41.361 READ: bw=57.6MiB/s (60.4MB/s), 14.1MiB/s-15.0MiB/s (14.8MB/s-15.7MB/s), io=290MiB (304MB), run=5002-5042msec 00:34:41.361 23:40:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:41.361 23:40:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:41.361 23:40:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:41.361 23:40:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:41.361 23:40:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:41.361 23:40:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:41.361 23:40:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.361 23:40:38 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.361 23:40:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.361 23:40:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:41.361 23:40:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.361 23:40:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.361 23:40:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.361 23:40:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:41.361 23:40:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:41.361 23:40:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:41.361 23:40:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:41.361 23:40:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.361 23:40:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.361 23:40:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.361 23:40:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:41.361 23:40:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.361 23:40:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.361 23:40:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.361 00:34:41.361 real 0m24.336s 00:34:41.361 user 4m31.262s 00:34:41.361 sys 0m7.346s 00:34:41.361 23:40:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:41.361 23:40:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.361 ************************************ 00:34:41.361 END TEST fio_dif_rand_params 00:34:41.361 ************************************ 00:34:41.361 23:40:38 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:41.361 23:40:38 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:41.361 23:40:38 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:41.361 23:40:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:41.361 ************************************ 00:34:41.361 START TEST fio_dif_digest 00:34:41.361 ************************************ 00:34:41.361 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:34:41.361 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:41.361 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:41.361 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:41.361 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:41.362 23:40:38 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:41.362 bdev_null0 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:41.362 [2024-07-25 23:40:38.932625] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:41.362 { 00:34:41.362 "params": { 00:34:41.362 "name": "Nvme$subsystem", 00:34:41.362 "trtype": "$TEST_TRANSPORT", 00:34:41.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.362 "adrfam": "ipv4", 00:34:41.362 "trsvcid": "$NVMF_PORT", 00:34:41.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.362 "hdgst": ${hdgst:-false}, 00:34:41.362 "ddgst": ${ddgst:-false} 00:34:41.362 }, 00:34:41.362 "method": "bdev_nvme_attach_controller" 00:34:41.362 } 00:34:41.362 EOF 00:34:41.362 )") 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:41.362 "params": { 00:34:41.362 "name": "Nvme0", 00:34:41.362 "trtype": "tcp", 00:34:41.362 "traddr": "10.0.0.2", 00:34:41.362 "adrfam": "ipv4", 00:34:41.362 "trsvcid": "4420", 00:34:41.362 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:41.362 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:41.362 "hdgst": true, 00:34:41.362 "ddgst": true 00:34:41.362 }, 00:34:41.362 "method": "bdev_nvme_attach_controller" 00:34:41.362 }' 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:41.362 23:40:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.621 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:41.621 ... 
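Note the hdgst/ddgst parameters flipped to true in the attach config above: that is what enables NVMe/TCP header and data digests for the fio_dif_digest run. A sketch of a fio job roughly matching the traced parameters (randread, bs=128k, iodepth=3, 3 jobs, 10s runtime); the bdev name Nvme0n1 is an assumption for the namespace exposed by the attached Nvme0 controller:

    [global]
    ioengine=spdk_bdev
    thread=1
    time_based=1
    runtime=10
    rw=randread
    bs=128k
    iodepth=3

    [filename0]
    filename=Nvme0n1
    numjobs=3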
00:34:41.621 fio-3.35 00:34:41.621 Starting 3 threads 00:34:41.621 EAL: No free 2048 kB hugepages reported on node 1 00:34:53.809 00:34:53.809 filename0: (groupid=0, jobs=1): err= 0: pid=1552648: Thu Jul 25 23:40:49 2024 00:34:53.809 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(269MiB/10048msec) 00:34:53.809 slat (nsec): min=4699, max=90120, avg=14368.74, stdev=2280.24 00:34:53.809 clat (usec): min=10575, max=52685, avg=13975.00, stdev=1477.22 00:34:53.809 lat (usec): min=10590, max=52700, avg=13989.37, stdev=1477.16 00:34:53.809 clat percentiles (usec): 00:34:53.809 | 1.00th=[11863], 5.00th=[12387], 10.00th=[12649], 20.00th=[13173], 00:34:53.809 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13960], 60.00th=[14222], 00:34:53.809 | 70.00th=[14353], 80.00th=[14746], 90.00th=[15139], 95.00th=[15533], 00:34:53.809 | 99.00th=[16581], 99.50th=[16909], 99.90th=[19268], 99.95th=[47449], 00:34:53.809 | 99.99th=[52691] 00:34:53.809 bw ( KiB/s): min=26880, max=28160, per=33.96%, avg=27507.20, stdev=347.21, samples=20 00:34:53.809 iops : min= 210, max= 220, avg=214.90, stdev= 2.71, samples=20 00:34:53.809 lat (msec) : 20=99.91%, 50=0.05%, 100=0.05% 00:34:53.809 cpu : usr=92.86%, sys=6.65%, ctx=27, majf=0, minf=142 00:34:53.809 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:53.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.809 issued rwts: total=2151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.809 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:53.809 filename0: (groupid=0, jobs=1): err= 0: pid=1552649: Thu Jul 25 23:40:49 2024 00:34:53.809 read: IOPS=206, BW=25.8MiB/s (27.0MB/s)(259MiB/10047msec) 00:34:53.809 slat (nsec): min=5278, max=44634, avg=14695.23, stdev=1828.82 00:34:53.809 clat (usec): min=11136, max=51044, avg=14521.04, stdev=1476.80 00:34:53.809 lat (usec): min=11152, max=51059, avg=14535.73, stdev=1476.76 00:34:53.809 clat percentiles (usec): 00:34:53.809 | 1.00th=[12125], 5.00th=[12911], 10.00th=[13304], 20.00th=[13698], 00:34:53.809 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:34:53.809 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15795], 95.00th=[16188], 00:34:53.809 | 99.00th=[17171], 99.50th=[17957], 99.90th=[22676], 99.95th=[46400], 00:34:53.809 | 99.99th=[51119] 00:34:53.809 bw ( KiB/s): min=24832, max=27392, per=32.68%, avg=26470.40, stdev=507.94, samples=20 00:34:53.809 iops : min= 194, max= 214, avg=206.80, stdev= 3.97, samples=20 00:34:53.809 lat (msec) : 20=99.86%, 50=0.10%, 100=0.05% 00:34:53.810 cpu : usr=92.47%, sys=7.04%, ctx=25, majf=0, minf=162 00:34:53.810 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:53.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.810 issued rwts: total=2070,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.810 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:53.810 filename0: (groupid=0, jobs=1): err= 0: pid=1552650: Thu Jul 25 23:40:49 2024 00:34:53.810 read: IOPS=212, BW=26.6MiB/s (27.9MB/s)(267MiB/10046msec) 00:34:53.810 slat (nsec): min=5262, max=40746, avg=14517.28, stdev=1751.65 00:34:53.810 clat (usec): min=10935, max=53950, avg=14057.56, stdev=1545.72 00:34:53.810 lat (usec): min=10950, max=53963, avg=14072.07, stdev=1545.64 00:34:53.810 clat percentiles (usec): 00:34:53.810 | 
1.00th=[11731], 5.00th=[12387], 10.00th=[12780], 20.00th=[13173], 00:34:53.810 | 30.00th=[13566], 40.00th=[13829], 50.00th=[13960], 60.00th=[14222], 00:34:53.810 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15270], 95.00th=[15795], 00:34:53.810 | 99.00th=[16581], 99.50th=[16909], 99.90th=[18482], 99.95th=[51119], 00:34:53.810 | 99.99th=[53740] 00:34:53.810 bw ( KiB/s): min=26164, max=28160, per=33.75%, avg=27343.40, stdev=502.79, samples=20 00:34:53.810 iops : min= 204, max= 220, avg=213.60, stdev= 3.98, samples=20 00:34:53.810 lat (msec) : 20=99.91%, 100=0.09% 00:34:53.810 cpu : usr=93.17%, sys=6.33%, ctx=46, majf=0, minf=167 00:34:53.810 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:53.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.810 issued rwts: total=2138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.810 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:53.810 00:34:53.810 Run status group 0 (all jobs): 00:34:53.810 READ: bw=79.1MiB/s (82.9MB/s), 25.8MiB/s-26.8MiB/s (27.0MB/s-28.1MB/s), io=795MiB (833MB), run=10046-10048msec 00:34:53.810 23:40:50 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:53.810 23:40:50 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:53.810 23:40:50 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:53.810 23:40:50 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:53.810 23:40:50 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:53.810 23:40:50 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:53.810 23:40:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.810 23:40:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:53.810 23:40:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.810 23:40:50 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:53.810 23:40:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.810 23:40:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:53.810 23:40:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.810 00:34:53.810 real 0m11.301s 00:34:53.810 user 0m29.244s 00:34:53.810 sys 0m2.324s 00:34:53.810 23:40:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:53.810 23:40:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:53.810 ************************************ 00:34:53.810 END TEST fio_dif_digest 00:34:53.810 ************************************ 00:34:53.810 23:40:50 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:53.810 23:40:50 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:53.810 23:40:50 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:53.810 23:40:50 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:34:53.810 23:40:50 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:53.810 23:40:50 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:34:53.810 23:40:50 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:53.810 23:40:50 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:53.810 rmmod nvme_tcp 00:34:53.810 rmmod nvme_fabrics 00:34:53.810 rmmod 
nvme_keyring 00:34:53.810 23:40:50 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:53.810 23:40:50 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:34:53.810 23:40:50 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:34:53.810 23:40:50 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1546599 ']' 00:34:53.810 23:40:50 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1546599 00:34:53.810 23:40:50 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1546599 ']' 00:34:53.810 23:40:50 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1546599 00:34:53.810 23:40:50 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:34:53.810 23:40:50 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:53.810 23:40:50 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1546599 00:34:53.810 23:40:50 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:53.810 23:40:50 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:53.810 23:40:50 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1546599' 00:34:53.810 killing process with pid 1546599 00:34:53.810 23:40:50 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1546599 00:34:53.810 23:40:50 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1546599 00:34:53.810 23:40:50 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:34:53.810 23:40:50 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:53.810 Waiting for block devices as requested 00:34:54.069 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:54.069 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:54.069 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:54.327 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:54.327 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:54.327 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:54.585 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:54.585 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:54.585 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:54.585 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:54.844 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:54.844 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:54.844 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:54.844 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:55.103 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:55.103 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:55.103 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:55.362 23:40:52 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:55.362 23:40:52 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:55.362 23:40:52 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:55.362 23:40:52 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:55.362 23:40:52 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:55.362 23:40:52 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:55.362 23:40:52 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:57.263 23:40:54 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:57.263 00:34:57.263 real 1m6.831s 00:34:57.263 user 6m27.775s 00:34:57.263 sys 0m19.168s 00:34:57.263 23:40:54 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:57.263 23:40:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 
00:34:57.263 ************************************ 00:34:57.263 END TEST nvmf_dif 00:34:57.263 ************************************ 00:34:57.263 23:40:54 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:57.263 23:40:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:57.263 23:40:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:57.263 23:40:54 -- common/autotest_common.sh@10 -- # set +x 00:34:57.263 ************************************ 00:34:57.263 START TEST nvmf_abort_qd_sizes 00:34:57.263 ************************************ 00:34:57.263 23:40:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:57.521 * Looking for test storage... 00:34:57.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:57.521 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:57.522 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:57.522 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:57.522 23:40:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:57.522 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:57.522 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:57.522 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:57.522 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:57.522 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:57.522 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:57.522 23:40:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:57.522 23:40:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:57.522 23:40:55 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:57.522 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:57.522 23:40:55 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:34:57.522 23:40:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:59.423 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:59.423 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:59.423 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:59.423 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:59.424 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
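The nvmf_tcp_init trace that follows is interleaved with xtrace noise; condensed, it wires the two ice ports found above into a target/initiator pair. cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT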
00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:59.424 23:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:59.424 23:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:59.424 23:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:59.424 23:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:59.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:59.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:34:59.424 00:34:59.424 --- 10.0.0.2 ping statistics --- 00:34:59.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.424 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:34:59.424 23:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:59.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:59.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:34:59.424 00:34:59.424 --- 10.0.0.1 ping statistics --- 00:34:59.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.424 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:34:59.424 23:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:59.424 23:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:34:59.424 23:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:34:59.424 23:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:00.801 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:00.801 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:00.801 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:00.801 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:00.801 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:00.801 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:00.801 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:00.801 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:00.801 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:00.801 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:00.801 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:00.801 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:00.801 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:00.801 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:00.801 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:00.801 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:01.734 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:01.734 23:40:59 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:01.734 23:40:59 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:01.734 23:40:59 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:01.734 23:40:59 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:01.734 23:40:59 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:01.734 23:40:59 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:01.734 23:40:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:01.734 23:40:59 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:01.734 23:40:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:01.734 23:40:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:01.734 23:40:59 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1557426 00:35:01.734 23:40:59 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:01.734 23:40:59 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1557426 00:35:01.734 23:40:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1557426 ']' 00:35:01.734 23:40:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:01.734 23:40:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:01.734 23:40:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:01.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:01.734 23:40:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:01.734 23:40:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:01.734 [2024-07-25 23:40:59.443986] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:35:01.734 [2024-07-25 23:40:59.444066] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:01.992 EAL: No free 2048 kB hugepages reported on node 1 00:35:01.992 [2024-07-25 23:40:59.481407] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:01.992 [2024-07-25 23:40:59.513303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:01.992 [2024-07-25 23:40:59.605099] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:01.992 [2024-07-25 23:40:59.605160] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:01.992 [2024-07-25 23:40:59.605176] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:01.992 [2024-07-25 23:40:59.605190] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:01.992 [2024-07-25 23:40:59.605202] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:01.992 [2024-07-25 23:40:59.605288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:01.992 [2024-07-25 23:40:59.605345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:01.992 [2024-07-25 23:40:59.605408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:01.992 [2024-07-25 23:40:59.605410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:02.249 23:40:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:02.249 23:40:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:35:02.249 23:40:59 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:02.249 23:40:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:02.249 23:40:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:02.249 23:40:59 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:02.249 23:40:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:02.249 23:40:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:02.249 23:40:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:02.249 23:40:59 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:35:02.249 23:40:59 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:35:02.249 23:40:59 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:35:02.249 23:40:59 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:02.249 23:40:59 nvmf_abort_qd_sizes -- 
scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:35:02.249 23:40:59 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:35:02.249 23:40:59 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:35:02.249 23:40:59 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:35:02.249 23:40:59 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:35:02.249 23:40:59 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:35:02.249 23:40:59 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:35:02.249 23:40:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:02.249 23:40:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:35:02.249 23:40:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:02.249 23:40:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:02.249 23:40:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:02.249 23:40:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:02.249 ************************************ 00:35:02.249 START TEST spdk_target_abort 00:35:02.249 ************************************ 00:35:02.249 23:40:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:35:02.249 23:40:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:02.249 23:40:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:35:02.249 23:40:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.249 23:40:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:05.525 spdk_targetn1 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:05.525 [2024-07-25 23:41:02.635897] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 
-- # set +x 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:05.525 [2024-07-25 23:41:02.668188] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:05.525 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:05.526 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:05.526 23:41:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:05.526 EAL: No free 2048 kB hugepages reported on node 1 00:35:08.864 Initializing NVMe Controllers 00:35:08.864 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:08.864 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:08.864 Initialization complete. Launching workers. 00:35:08.864 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13163, failed: 0 00:35:08.864 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1219, failed to submit 11944 00:35:08.864 success 822, unsuccess 397, failed 0 00:35:08.864 23:41:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:08.864 23:41:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:08.864 EAL: No free 2048 kB hugepages reported on node 1 00:35:12.140 Initializing NVMe Controllers 00:35:12.140 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:12.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:12.140 Initialization complete. Launching workers. 00:35:12.140 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8569, failed: 0 00:35:12.140 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1242, failed to submit 7327 00:35:12.140 success 308, unsuccess 934, failed 0 00:35:12.140 23:41:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:12.140 23:41:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:12.140 EAL: No free 2048 kB hugepages reported on node 1 00:35:14.667 Initializing NVMe Controllers 00:35:14.667 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:14.667 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:14.667 Initialization complete. Launching workers. 
00:35:14.667 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30537, failed: 0 00:35:14.667 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2685, failed to submit 27852 00:35:14.667 success 511, unsuccess 2174, failed 0 00:35:14.667 23:41:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:14.667 23:41:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.667 23:41:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:14.667 23:41:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.667 23:41:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:14.667 23:41:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.667 23:41:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:16.039 23:41:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.039 23:41:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1557426 00:35:16.039 23:41:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1557426 ']' 00:35:16.039 23:41:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1557426 00:35:16.039 23:41:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:35:16.039 23:41:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:16.039 23:41:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1557426 00:35:16.039 23:41:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:16.039 23:41:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:16.039 23:41:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1557426' 00:35:16.039 killing process with pid 1557426 00:35:16.039 23:41:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1557426 00:35:16.039 23:41:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1557426 00:35:16.297 00:35:16.297 real 0m14.114s 00:35:16.297 user 0m53.403s 00:35:16.297 sys 0m2.653s 00:35:16.297 23:41:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:16.297 23:41:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:16.297 ************************************ 00:35:16.297 END TEST spdk_target_abort 00:35:16.297 ************************************ 00:35:16.297 23:41:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:16.297 23:41:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:16.297 23:41:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:16.297 23:41:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:16.297 ************************************ 00:35:16.297 START TEST kernel_target_abort 00:35:16.297 
************************************ 00:35:16.297 23:41:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:35:16.297 23:41:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:16.297 23:41:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:35:16.297 23:41:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:16.297 23:41:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:16.297 23:41:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.297 23:41:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.297 23:41:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:16.297 23:41:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:16.297 23:41:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:16.297 23:41:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:16.297 23:41:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:16.297 23:41:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:16.297 23:41:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:16.297 23:41:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:16.297 23:41:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:16.297 23:41:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:16.297 23:41:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:16.297 23:41:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:35:16.297 23:41:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:16.297 23:41:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:16.297 23:41:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:16.297 23:41:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:17.232 Waiting for block devices as requested 00:35:17.492 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:17.492 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:17.492 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:17.750 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:17.750 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:17.750 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:17.750 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:18.008 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:18.008 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:18.008 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:18.008 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:18.266 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:18.266 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:18.266 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:18.525 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:18.525 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:18.525 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:18.784 No valid GPT data, bailing 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:18.784 23:41:16 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:18.784 00:35:18.784 Discovery Log Number of Records 2, Generation counter 2 00:35:18.784 =====Discovery Log Entry 0====== 00:35:18.784 trtype: tcp 00:35:18.784 adrfam: ipv4 00:35:18.784 subtype: current discovery subsystem 00:35:18.784 treq: not specified, sq flow control disable supported 00:35:18.784 portid: 1 00:35:18.784 trsvcid: 4420 00:35:18.784 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:18.784 traddr: 10.0.0.1 00:35:18.784 eflags: none 00:35:18.784 sectype: none 00:35:18.784 =====Discovery Log Entry 1====== 00:35:18.784 trtype: tcp 00:35:18.784 adrfam: ipv4 00:35:18.784 subtype: nvme subsystem 00:35:18.784 treq: not specified, sq flow control disable supported 00:35:18.784 portid: 1 00:35:18.784 trsvcid: 4420 00:35:18.784 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:18.784 traddr: 10.0.0.1 00:35:18.784 eflags: none 00:35:18.784 sectype: none 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:18.784 23:41:16 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:18.784 23:41:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:18.784 EAL: No free 2048 kB hugepages reported on node 1 00:35:22.063 Initializing NVMe Controllers 00:35:22.063 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:22.063 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:22.063 Initialization complete. Launching workers. 00:35:22.063 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38014, failed: 0 00:35:22.063 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38014, failed to submit 0 00:35:22.063 success 0, unsuccess 38014, failed 0 00:35:22.063 23:41:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:22.063 23:41:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:22.063 EAL: No free 2048 kB hugepages reported on node 1 00:35:25.359 Initializing NVMe Controllers 00:35:25.359 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:25.359 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:25.359 Initialization complete. Launching workers. 
00:35:25.359 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 71907, failed: 0 00:35:25.359 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18138, failed to submit 53769 00:35:25.359 success 0, unsuccess 18138, failed 0 00:35:25.360 23:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:25.360 23:41:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:25.360 EAL: No free 2048 kB hugepages reported on node 1 00:35:28.637 Initializing NVMe Controllers 00:35:28.637 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:28.637 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:28.637 Initialization complete. Launching workers. 00:35:28.637 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 75554, failed: 0 00:35:28.637 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18878, failed to submit 56676 00:35:28.637 success 0, unsuccess 18878, failed 0 00:35:28.637 23:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:28.637 23:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:28.637 23:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:35:28.637 23:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:28.637 23:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:28.637 23:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:28.637 23:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:28.637 23:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:35:28.638 23:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:35:28.638 23:41:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:29.202 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:29.202 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:29.202 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:29.460 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:29.460 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:29.460 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:29.460 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:29.460 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:29.460 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:29.460 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:29.460 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:29.460 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:29.460 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:29.460 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:35:29.460 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:29.460 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:30.394 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:30.394 00:35:30.394 real 0m14.118s 00:35:30.394 user 0m5.723s 00:35:30.394 sys 0m3.214s 00:35:30.394 23:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:30.394 23:41:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:30.394 ************************************ 00:35:30.394 END TEST kernel_target_abort 00:35:30.394 ************************************ 00:35:30.394 23:41:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:30.394 23:41:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:30.394 23:41:28 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:30.394 23:41:28 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:35:30.395 23:41:28 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:30.395 23:41:28 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:35:30.395 23:41:28 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:30.395 23:41:28 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:30.395 rmmod nvme_tcp 00:35:30.395 rmmod nvme_fabrics 00:35:30.653 rmmod nvme_keyring 00:35:30.653 23:41:28 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:30.653 23:41:28 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:35:30.653 23:41:28 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:35:30.653 23:41:28 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1557426 ']' 00:35:30.653 23:41:28 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1557426 00:35:30.653 23:41:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1557426 ']' 00:35:30.653 23:41:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1557426 00:35:30.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1557426) - No such process 00:35:30.653 23:41:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1557426 is not found' 00:35:30.653 Process with pid 1557426 is not found 00:35:30.653 23:41:28 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:35:30.653 23:41:28 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:31.633 Waiting for block devices as requested 00:35:31.633 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:31.891 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:31.891 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:31.891 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:32.149 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:32.149 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:32.149 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:32.149 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:32.408 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:32.408 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:32.408 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:32.409 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:32.667 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:32.667 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:32.667 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:32.667 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:35:32.925 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:32.925 23:41:30 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:32.925 23:41:30 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:32.925 23:41:30 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:32.925 23:41:30 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:32.925 23:41:30 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:32.925 23:41:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:32.925 23:41:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:35.458 23:41:32 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:35.458 00:35:35.458 real 0m37.636s 00:35:35.458 user 1m1.255s 00:35:35.458 sys 0m9.173s 00:35:35.458 23:41:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:35.458 23:41:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:35.458 ************************************ 00:35:35.458 END TEST nvmf_abort_qd_sizes 00:35:35.458 ************************************ 00:35:35.458 23:41:32 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:35.458 23:41:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:35.458 23:41:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:35.458 23:41:32 -- common/autotest_common.sh@10 -- # set +x 00:35:35.458 ************************************ 00:35:35.458 START TEST keyring_file 00:35:35.458 ************************************ 00:35:35.458 23:41:32 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:35.458 * Looking for test storage... 
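Before keyring_file begins, nvmftestfini above unwound the TCP fixture: modprobe -v -r nvme-tcp removed nvme_tcp along with nvme_fabrics and nvme_keyring (the rmmod lines), _remove_spdk_ns discarded the target namespace, and the leftover initiator address was flushed. A hedged equivalent of that cleanup, assuming the namespace-removal helper reduces to ip netns delete (its body never appears in this trace):

# Reverse of nvmf_tcp_init: unload the initiator modules, drop the namespace,
# and clear the address left on the initiator-side interface.
modprobe -v -r nvme-tcp           # cascades through nvme_fabrics/nvme_keyring
ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1          # matches the flush at nvmf/common.sh@279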
00:35:35.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:35.458 23:41:32 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:35.458 23:41:32 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:35.458 23:41:32 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:35.458 23:41:32 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:35.458 23:41:32 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:35.458 23:41:32 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:35.458 23:41:32 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:35.458 23:41:32 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:35.458 23:41:32 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:35.458 23:41:32 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:35.458 23:41:32 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:35.458 23:41:32 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:35.458 23:41:32 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:35.458 23:41:32 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:35.458 23:41:32 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:35.458 23:41:32 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:35.458 23:41:32 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:35.458 23:41:32 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:35.458 23:41:32 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:35.458 23:41:32 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:35.458 23:41:32 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:35.458 23:41:32 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:35.458 23:41:32 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:35.458 23:41:32 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.458 23:41:32 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.459 23:41:32 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.459 23:41:32 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:35.459 23:41:32 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.459 23:41:32 keyring_file -- nvmf/common.sh@47 -- # : 0 00:35:35.459 23:41:32 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:35.459 23:41:32 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:35.459 23:41:32 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:35.459 23:41:32 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:35.459 23:41:32 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:35.459 23:41:32 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:35.459 23:41:32 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:35.459 23:41:32 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:35.459 23:41:32 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:35.459 23:41:32 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:35.459 23:41:32 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:35.459 23:41:32 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:35.459 23:41:32 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:35.459 23:41:32 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:35.459 23:41:32 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:35.459 23:41:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:35.459 23:41:32 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:35.459 23:41:32 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:35.459 23:41:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:35.459 23:41:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:35.459 23:41:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.fYIEZvK2uE 00:35:35.459 23:41:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:35.459 23:41:32 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:35.459 23:41:32 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:35:35.459 23:41:32 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:35.459 23:41:32 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:35:35.459 23:41:32 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:35:35.459 23:41:32 keyring_file -- nvmf/common.sh@705 -- # python - 00:35:35.459 23:41:32 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.fYIEZvK2uE 00:35:35.459 23:41:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.fYIEZvK2uE 00:35:35.459 23:41:32 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.fYIEZvK2uE 00:35:35.459 23:41:32 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:35.459 23:41:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:35.459 23:41:32 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:35.459 23:41:32 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:35.459 23:41:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:35.459 23:41:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:35.459 23:41:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BUxNMjiKB0 00:35:35.459 23:41:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:35.459 23:41:32 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:35.459 23:41:32 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:35:35.459 23:41:32 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:35.459 23:41:32 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:35:35.459 23:41:32 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:35:35.459 23:41:32 keyring_file -- nvmf/common.sh@705 -- # python - 00:35:35.459 23:41:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BUxNMjiKB0 00:35:35.459 23:41:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BUxNMjiKB0 00:35:35.459 23:41:32 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.BUxNMjiKB0 00:35:35.459 23:41:32 keyring_file -- keyring/file.sh@30 -- # tgtpid=1563181 00:35:35.459 23:41:32 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:35.459 23:41:32 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1563181 00:35:35.459 23:41:32 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1563181 ']' 00:35:35.459 23:41:32 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:35.459 23:41:32 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:35.459 23:41:32 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:35.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:35.459 23:41:32 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:35.459 23:41:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:35.459 [2024-07-25 23:41:32.883071] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:35:35.459 [2024-07-25 23:41:32.883176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1563181 ] 00:35:35.459 EAL: No free 2048 kB hugepages reported on node 1 00:35:35.459 [2024-07-25 23:41:32.915335] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
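prep_key above turns each raw key into a TLS PSK interchange string through format_interchange_psk, an inline python step whose body the trace does not show. A hedged reconstruction of what that step most plausibly emits, following the NVMe/TCP PSK interchange layout prefix:hash-id:base64(key || CRC32(key)); the little-endian CRC and the 00 hash identifier for digest 0 are assumptions, not confirmed by this log:

# Build an interchange PSK the way prep_key appears to: append a CRC32 to the
# configured key, base64 the result, and wrap it in the NVMeTLSkey-1 framing.
key=00112233445566778899aabbccddeeff     # key0 from the trace
path=$(mktemp)                           # stands in for /tmp/tmp.fYIEZvK2uE
python - "$key" > "$path" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # assumed byte order
print("NVMeTLSkey-1:{:02x}:{}:".format(0, base64.b64encode(key + crc).decode()), end="")
EOF
chmod 0600 "$path"                       # mirrors the chmod at keyring/common.sh@21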
00:35:35.459 [2024-07-25 23:41:32.941563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:35.459 [2024-07-25 23:41:33.029912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:35.718 23:41:33 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:35.718 23:41:33 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:35:35.718 23:41:33 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:35.718 23:41:33 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.718 23:41:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:35.718 [2024-07-25 23:41:33.269132] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:35.718 null0 00:35:35.718 [2024-07-25 23:41:33.301202] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:35.718 [2024-07-25 23:41:33.301687] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:35.718 [2024-07-25 23:41:33.309197] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:35:35.718 23:41:33 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.718 23:41:33 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:35.718 23:41:33 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:35.718 23:41:33 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:35.718 23:41:33 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:35.718 23:41:33 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:35.718 23:41:33 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:35.718 23:41:33 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:35.718 23:41:33 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:35.718 23:41:33 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.718 23:41:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:35.718 [2024-07-25 23:41:33.317207] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:35.718 request: 00:35:35.718 { 00:35:35.718 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:35.718 "secure_channel": false, 00:35:35.718 "listen_address": { 00:35:35.718 "trtype": "tcp", 00:35:35.718 "traddr": "127.0.0.1", 00:35:35.718 "trsvcid": "4420" 00:35:35.718 }, 00:35:35.718 "method": "nvmf_subsystem_add_listener", 00:35:35.718 "req_id": 1 00:35:35.718 } 00:35:35.718 Got JSON-RPC error response 00:35:35.718 response: 00:35:35.718 { 00:35:35.718 "code": -32602, 00:35:35.718 "message": "Invalid parameters" 00:35:35.718 } 00:35:35.718 23:41:33 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:35.718 23:41:33 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:35.718 23:41:33 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:35.718 23:41:33 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:35.718 23:41:33 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:35.718 23:41:33 keyring_file -- keyring/file.sh@46 -- # bperfpid=1563200 00:35:35.718 23:41:33 
keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:35.718 23:41:33 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1563200 /var/tmp/bperf.sock 00:35:35.718 23:41:33 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1563200 ']' 00:35:35.718 23:41:33 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:35.718 23:41:33 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:35.718 23:41:33 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:35.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:35.718 23:41:33 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:35.718 23:41:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:35.718 [2024-07-25 23:41:33.360418] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:35:35.718 [2024-07-25 23:41:33.360489] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1563200 ] 00:35:35.718 EAL: No free 2048 kB hugepages reported on node 1 00:35:35.718 [2024-07-25 23:41:33.390840] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:35.718 [2024-07-25 23:41:33.418469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:35.976 [2024-07-25 23:41:33.505472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:35.976 23:41:33 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:35.976 23:41:33 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:35:35.976 23:41:33 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fYIEZvK2uE 00:35:35.976 23:41:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fYIEZvK2uE 00:35:36.234 23:41:33 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BUxNMjiKB0 00:35:36.234 23:41:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BUxNMjiKB0 00:35:36.491 23:41:34 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:35:36.491 23:41:34 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:35:36.491 23:41:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:36.491 23:41:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:36.491 23:41:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:36.749 23:41:34 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.fYIEZvK2uE == \/\t\m\p\/\t\m\p\.\f\Y\I\E\Z\v\K\2\u\E ]] 00:35:36.749 23:41:34 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:35:36.749 23:41:34 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:36.749 23:41:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:35:36.749 23:41:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:36.749 23:41:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:37.006 23:41:34 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.BUxNMjiKB0 == \/\t\m\p\/\t\m\p\.\B\U\x\N\M\j\i\K\B\0 ]] 00:35:37.006 23:41:34 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:35:37.006 23:41:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:37.006 23:41:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:37.006 23:41:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:37.006 23:41:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:37.006 23:41:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:37.263 23:41:34 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:35:37.263 23:41:34 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:35:37.263 23:41:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:37.263 23:41:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:37.263 23:41:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:37.263 23:41:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:37.263 23:41:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:37.521 23:41:35 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:37.521 23:41:35 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:37.521 23:41:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:37.778 [2024-07-25 23:41:35.327255] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:37.778 nvme0n1 00:35:37.778 23:41:35 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:35:37.778 23:41:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:37.778 23:41:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:37.778 23:41:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:37.778 23:41:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:37.778 23:41:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:38.035 23:41:35 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:35:38.035 23:41:35 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:35:38.035 23:41:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:38.036 23:41:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:38.036 23:41:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:38.036 23:41:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys
00:35:38.036 23:41:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:35:38.293 23:41:35 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 ))
00:35:38.293 23:41:35 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:38.293 Running I/O for 1 seconds...
00:35:39.663
00:35:39.663 Latency(us)
00:35:39.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:39.663 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:35:39.663 nvme0n1 : 1.01 6952.37 27.16 0.00 0.00 18310.89 5728.33 27185.30
00:35:39.663 ===================================================================================================================
00:35:39.663 Total : 6952.37 27.16 0.00 0.00 18310.89 5728.33 27185.30
00:35:39.663 0
00:35:39.663 23:41:37 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:35:39.663 23:41:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:35:39.663 23:41:37 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0
00:35:39.663 23:41:37 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:35:39.663 23:41:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:35:39.663 23:41:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:35:39.663 23:41:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:39.663 23:41:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:35:39.920 23:41:37 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 ))
00:35:39.920 23:41:37 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1
00:35:39.920 23:41:37 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:35:39.920 23:41:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:35:39.920 23:41:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:35:39.920 23:41:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:39.920 23:41:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:35:40.178 23:41:37 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:35:40.178 23:41:37 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:35:40.178 23:41:37 keyring_file -- common/autotest_common.sh@650 -- # local es=0
00:35:40.178 23:41:37 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:35:40.178 23:41:37 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
00:35:40.178 23:41:37 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:35:40.178 23:41:37 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd
00:35:40.178 23:41:37 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:35:40.178 23:41:37 keyring_file --
common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:40.178 23:41:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:40.435 [2024-07-25 23:41:38.043613] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:40.435 [2024-07-25 23:41:38.044042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189a4e0 (107): Transport endpoint is not connected 00:35:40.435 [2024-07-25 23:41:38.045033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189a4e0 (9): Bad file descriptor 00:35:40.435 [2024-07-25 23:41:38.046032] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:40.435 [2024-07-25 23:41:38.046055] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:40.435 [2024-07-25 23:41:38.046079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:40.435 request: 00:35:40.435 { 00:35:40.435 "name": "nvme0", 00:35:40.435 "trtype": "tcp", 00:35:40.435 "traddr": "127.0.0.1", 00:35:40.435 "adrfam": "ipv4", 00:35:40.435 "trsvcid": "4420", 00:35:40.435 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:40.435 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:40.435 "prchk_reftag": false, 00:35:40.435 "prchk_guard": false, 00:35:40.435 "hdgst": false, 00:35:40.435 "ddgst": false, 00:35:40.435 "psk": "key1", 00:35:40.435 "method": "bdev_nvme_attach_controller", 00:35:40.435 "req_id": 1 00:35:40.435 } 00:35:40.435 Got JSON-RPC error response 00:35:40.435 response: 00:35:40.435 { 00:35:40.435 "code": -5, 00:35:40.435 "message": "Input/output error" 00:35:40.435 } 00:35:40.435 23:41:38 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:40.435 23:41:38 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:40.435 23:41:38 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:40.435 23:41:38 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:40.435 23:41:38 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:35:40.435 23:41:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:40.435 23:41:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:40.435 23:41:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:40.435 23:41:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:40.435 23:41:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:40.693 23:41:38 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:35:40.693 23:41:38 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:35:40.693 23:41:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:40.693 23:41:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:40.693 23:41:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:40.693 23:41:38 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:40.693 23:41:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:40.950 23:41:38 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:40.950 23:41:38 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:35:40.950 23:41:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:41.208 23:41:38 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:35:41.208 23:41:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:41.466 23:41:39 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:35:41.466 23:41:39 keyring_file -- keyring/file.sh@77 -- # jq length 00:35:41.466 23:41:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:41.724 23:41:39 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:35:41.724 23:41:39 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.fYIEZvK2uE 00:35:41.724 23:41:39 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.fYIEZvK2uE 00:35:41.724 23:41:39 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:41.724 23:41:39 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.fYIEZvK2uE 00:35:41.724 23:41:39 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:41.724 23:41:39 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:41.724 23:41:39 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:41.724 23:41:39 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:41.724 23:41:39 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fYIEZvK2uE 00:35:41.724 23:41:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fYIEZvK2uE 00:35:41.982 [2024-07-25 23:41:39.563928] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.fYIEZvK2uE': 0100660 00:35:41.982 [2024-07-25 23:41:39.563967] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:41.982 request: 00:35:41.982 { 00:35:41.982 "name": "key0", 00:35:41.982 "path": "/tmp/tmp.fYIEZvK2uE", 00:35:41.982 "method": "keyring_file_add_key", 00:35:41.982 "req_id": 1 00:35:41.982 } 00:35:41.982 Got JSON-RPC error response 00:35:41.982 response: 00:35:41.982 { 00:35:41.982 "code": -1, 00:35:41.982 "message": "Operation not permitted" 00:35:41.982 } 00:35:41.982 23:41:39 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:41.982 23:41:39 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:41.982 23:41:39 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:41.982 23:41:39 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:41.982 23:41:39 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.fYIEZvK2uE 00:35:41.982 23:41:39 keyring_file -- 
keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fYIEZvK2uE 00:35:41.982 23:41:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fYIEZvK2uE 00:35:42.240 23:41:39 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.fYIEZvK2uE 00:35:42.240 23:41:39 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:35:42.240 23:41:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:42.240 23:41:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:42.240 23:41:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:42.240 23:41:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:42.240 23:41:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:42.498 23:41:40 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:35:42.498 23:41:40 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:42.498 23:41:40 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:42.498 23:41:40 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:42.498 23:41:40 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:42.498 23:41:40 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:42.498 23:41:40 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:42.498 23:41:40 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:42.498 23:41:40 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:42.498 23:41:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:42.756 [2024-07-25 23:41:40.346126] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.fYIEZvK2uE': No such file or directory 00:35:42.757 [2024-07-25 23:41:40.346167] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:42.757 [2024-07-25 23:41:40.346211] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:42.757 [2024-07-25 23:41:40.346223] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:42.757 [2024-07-25 23:41:40.346235] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:42.757 request: 00:35:42.757 { 00:35:42.757 "name": "nvme0", 00:35:42.757 "trtype": "tcp", 00:35:42.757 "traddr": "127.0.0.1", 00:35:42.757 "adrfam": "ipv4", 00:35:42.757 "trsvcid": "4420", 00:35:42.757 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:42.757 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:42.757 "prchk_reftag": false, 00:35:42.757 
"prchk_guard": false, 00:35:42.757 "hdgst": false, 00:35:42.757 "ddgst": false, 00:35:42.757 "psk": "key0", 00:35:42.757 "method": "bdev_nvme_attach_controller", 00:35:42.757 "req_id": 1 00:35:42.757 } 00:35:42.757 Got JSON-RPC error response 00:35:42.757 response: 00:35:42.757 { 00:35:42.757 "code": -19, 00:35:42.757 "message": "No such device" 00:35:42.757 } 00:35:42.757 23:41:40 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:42.757 23:41:40 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:42.757 23:41:40 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:42.757 23:41:40 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:42.757 23:41:40 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:35:42.757 23:41:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:43.015 23:41:40 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:43.015 23:41:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:43.015 23:41:40 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:43.015 23:41:40 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:43.015 23:41:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:43.015 23:41:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:43.015 23:41:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.sxni0yYJ31 00:35:43.015 23:41:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:43.015 23:41:40 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:43.015 23:41:40 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:35:43.015 23:41:40 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:43.015 23:41:40 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:35:43.015 23:41:40 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:35:43.015 23:41:40 keyring_file -- nvmf/common.sh@705 -- # python - 00:35:43.015 23:41:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.sxni0yYJ31 00:35:43.015 23:41:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.sxni0yYJ31 00:35:43.015 23:41:40 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.sxni0yYJ31 00:35:43.015 23:41:40 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sxni0yYJ31 00:35:43.015 23:41:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sxni0yYJ31 00:35:43.273 23:41:40 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:43.273 23:41:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:43.530 nvme0n1 00:35:43.530 23:41:41 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:35:43.530 23:41:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:43.530 23:41:41 keyring_file 
-- keyring/common.sh@12 -- # jq -r .refcnt 00:35:43.788 23:41:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:43.788 23:41:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:43.788 23:41:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:43.788 23:41:41 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:35:43.788 23:41:41 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:35:43.788 23:41:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:44.352 23:41:41 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:35:44.352 23:41:41 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:35:44.352 23:41:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:44.352 23:41:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:44.352 23:41:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:44.352 23:41:42 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:35:44.352 23:41:42 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:35:44.352 23:41:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:44.352 23:41:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:44.352 23:41:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:44.352 23:41:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:44.352 23:41:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:44.609 23:41:42 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:35:44.609 23:41:42 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:44.609 23:41:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:44.867 23:41:42 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:35:44.867 23:41:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:44.867 23:41:42 keyring_file -- keyring/file.sh@104 -- # jq length 00:35:45.125 23:41:42 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:35:45.125 23:41:42 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sxni0yYJ31 00:35:45.125 23:41:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sxni0yYJ31 00:35:45.382 23:41:43 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BUxNMjiKB0 00:35:45.382 23:41:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BUxNMjiKB0 00:35:45.640 23:41:43 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key0 00:35:45.640 23:41:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:45.897 nvme0n1 00:35:45.897 23:41:43 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:35:45.897 23:41:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:46.462 23:41:43 keyring_file -- keyring/file.sh@112 -- # config='{ 00:35:46.462 "subsystems": [ 00:35:46.462 { 00:35:46.462 "subsystem": "keyring", 00:35:46.462 "config": [ 00:35:46.462 { 00:35:46.462 "method": "keyring_file_add_key", 00:35:46.462 "params": { 00:35:46.462 "name": "key0", 00:35:46.462 "path": "/tmp/tmp.sxni0yYJ31" 00:35:46.462 } 00:35:46.462 }, 00:35:46.462 { 00:35:46.462 "method": "keyring_file_add_key", 00:35:46.462 "params": { 00:35:46.462 "name": "key1", 00:35:46.462 "path": "/tmp/tmp.BUxNMjiKB0" 00:35:46.462 } 00:35:46.462 } 00:35:46.462 ] 00:35:46.462 }, 00:35:46.462 { 00:35:46.463 "subsystem": "iobuf", 00:35:46.463 "config": [ 00:35:46.463 { 00:35:46.463 "method": "iobuf_set_options", 00:35:46.463 "params": { 00:35:46.463 "small_pool_count": 8192, 00:35:46.463 "large_pool_count": 1024, 00:35:46.463 "small_bufsize": 8192, 00:35:46.463 "large_bufsize": 135168 00:35:46.463 } 00:35:46.463 } 00:35:46.463 ] 00:35:46.463 }, 00:35:46.463 { 00:35:46.463 "subsystem": "sock", 00:35:46.463 "config": [ 00:35:46.463 { 00:35:46.463 "method": "sock_set_default_impl", 00:35:46.463 "params": { 00:35:46.463 "impl_name": "posix" 00:35:46.463 } 00:35:46.463 }, 00:35:46.463 { 00:35:46.463 "method": "sock_impl_set_options", 00:35:46.463 "params": { 00:35:46.463 "impl_name": "ssl", 00:35:46.463 "recv_buf_size": 4096, 00:35:46.463 "send_buf_size": 4096, 00:35:46.463 "enable_recv_pipe": true, 00:35:46.463 "enable_quickack": false, 00:35:46.463 "enable_placement_id": 0, 00:35:46.463 "enable_zerocopy_send_server": true, 00:35:46.463 "enable_zerocopy_send_client": false, 00:35:46.463 "zerocopy_threshold": 0, 00:35:46.463 "tls_version": 0, 00:35:46.463 "enable_ktls": false 00:35:46.463 } 00:35:46.463 }, 00:35:46.463 { 00:35:46.463 "method": "sock_impl_set_options", 00:35:46.463 "params": { 00:35:46.463 "impl_name": "posix", 00:35:46.463 "recv_buf_size": 2097152, 00:35:46.463 "send_buf_size": 2097152, 00:35:46.463 "enable_recv_pipe": true, 00:35:46.463 "enable_quickack": false, 00:35:46.463 "enable_placement_id": 0, 00:35:46.463 "enable_zerocopy_send_server": true, 00:35:46.463 "enable_zerocopy_send_client": false, 00:35:46.463 "zerocopy_threshold": 0, 00:35:46.463 "tls_version": 0, 00:35:46.463 "enable_ktls": false 00:35:46.463 } 00:35:46.463 } 00:35:46.463 ] 00:35:46.463 }, 00:35:46.463 { 00:35:46.463 "subsystem": "vmd", 00:35:46.463 "config": [] 00:35:46.463 }, 00:35:46.463 { 00:35:46.463 "subsystem": "accel", 00:35:46.463 "config": [ 00:35:46.463 { 00:35:46.463 "method": "accel_set_options", 00:35:46.463 "params": { 00:35:46.463 "small_cache_size": 128, 00:35:46.463 "large_cache_size": 16, 00:35:46.463 "task_count": 2048, 00:35:46.463 "sequence_count": 2048, 00:35:46.463 "buf_count": 2048 00:35:46.463 } 00:35:46.463 } 00:35:46.463 ] 00:35:46.463 }, 00:35:46.463 { 00:35:46.463 "subsystem": "bdev", 00:35:46.463 "config": [ 00:35:46.463 { 00:35:46.463 "method": "bdev_set_options", 00:35:46.463 
"params": { 00:35:46.463 "bdev_io_pool_size": 65535, 00:35:46.463 "bdev_io_cache_size": 256, 00:35:46.463 "bdev_auto_examine": true, 00:35:46.463 "iobuf_small_cache_size": 128, 00:35:46.463 "iobuf_large_cache_size": 16 00:35:46.463 } 00:35:46.463 }, 00:35:46.463 { 00:35:46.463 "method": "bdev_raid_set_options", 00:35:46.463 "params": { 00:35:46.463 "process_window_size_kb": 1024, 00:35:46.463 "process_max_bandwidth_mb_sec": 0 00:35:46.463 } 00:35:46.463 }, 00:35:46.463 { 00:35:46.463 "method": "bdev_iscsi_set_options", 00:35:46.463 "params": { 00:35:46.463 "timeout_sec": 30 00:35:46.463 } 00:35:46.463 }, 00:35:46.463 { 00:35:46.463 "method": "bdev_nvme_set_options", 00:35:46.463 "params": { 00:35:46.463 "action_on_timeout": "none", 00:35:46.463 "timeout_us": 0, 00:35:46.463 "timeout_admin_us": 0, 00:35:46.463 "keep_alive_timeout_ms": 10000, 00:35:46.463 "arbitration_burst": 0, 00:35:46.463 "low_priority_weight": 0, 00:35:46.463 "medium_priority_weight": 0, 00:35:46.463 "high_priority_weight": 0, 00:35:46.463 "nvme_adminq_poll_period_us": 10000, 00:35:46.463 "nvme_ioq_poll_period_us": 0, 00:35:46.463 "io_queue_requests": 512, 00:35:46.463 "delay_cmd_submit": true, 00:35:46.463 "transport_retry_count": 4, 00:35:46.463 "bdev_retry_count": 3, 00:35:46.463 "transport_ack_timeout": 0, 00:35:46.463 "ctrlr_loss_timeout_sec": 0, 00:35:46.463 "reconnect_delay_sec": 0, 00:35:46.463 "fast_io_fail_timeout_sec": 0, 00:35:46.463 "disable_auto_failback": false, 00:35:46.463 "generate_uuids": false, 00:35:46.463 "transport_tos": 0, 00:35:46.463 "nvme_error_stat": false, 00:35:46.463 "rdma_srq_size": 0, 00:35:46.463 "io_path_stat": false, 00:35:46.463 "allow_accel_sequence": false, 00:35:46.463 "rdma_max_cq_size": 0, 00:35:46.463 "rdma_cm_event_timeout_ms": 0, 00:35:46.463 "dhchap_digests": [ 00:35:46.463 "sha256", 00:35:46.463 "sha384", 00:35:46.463 "sha512" 00:35:46.463 ], 00:35:46.463 "dhchap_dhgroups": [ 00:35:46.463 "null", 00:35:46.463 "ffdhe2048", 00:35:46.463 "ffdhe3072", 00:35:46.463 "ffdhe4096", 00:35:46.463 "ffdhe6144", 00:35:46.463 "ffdhe8192" 00:35:46.463 ] 00:35:46.463 } 00:35:46.463 }, 00:35:46.463 { 00:35:46.463 "method": "bdev_nvme_attach_controller", 00:35:46.463 "params": { 00:35:46.463 "name": "nvme0", 00:35:46.463 "trtype": "TCP", 00:35:46.463 "adrfam": "IPv4", 00:35:46.463 "traddr": "127.0.0.1", 00:35:46.463 "trsvcid": "4420", 00:35:46.463 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:46.463 "prchk_reftag": false, 00:35:46.463 "prchk_guard": false, 00:35:46.463 "ctrlr_loss_timeout_sec": 0, 00:35:46.463 "reconnect_delay_sec": 0, 00:35:46.463 "fast_io_fail_timeout_sec": 0, 00:35:46.463 "psk": "key0", 00:35:46.463 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:46.463 "hdgst": false, 00:35:46.463 "ddgst": false 00:35:46.463 } 00:35:46.463 }, 00:35:46.463 { 00:35:46.463 "method": "bdev_nvme_set_hotplug", 00:35:46.463 "params": { 00:35:46.463 "period_us": 100000, 00:35:46.463 "enable": false 00:35:46.463 } 00:35:46.463 }, 00:35:46.463 { 00:35:46.463 "method": "bdev_wait_for_examine" 00:35:46.463 } 00:35:46.463 ] 00:35:46.463 }, 00:35:46.463 { 00:35:46.463 "subsystem": "nbd", 00:35:46.463 "config": [] 00:35:46.463 } 00:35:46.463 ] 00:35:46.463 }' 00:35:46.463 23:41:43 keyring_file -- keyring/file.sh@114 -- # killprocess 1563200 00:35:46.463 23:41:43 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1563200 ']' 00:35:46.463 23:41:43 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1563200 00:35:46.463 23:41:43 keyring_file -- common/autotest_common.sh@955 -- # uname 
00:35:46.463 23:41:43 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:35:46.463 23:41:43 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1563200
00:35:46.463 23:41:43 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:35:46.463 23:41:43 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:35:46.463 23:41:43 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1563200'
killing process with pid 1563200
00:35:46.463 23:41:43 keyring_file -- common/autotest_common.sh@969 -- # kill 1563200
00:35:46.463 Received shutdown signal, test time was about 1.000000 seconds
00:35:46.463
00:35:46.463 Latency(us)
00:35:46.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:46.463 ===================================================================================================================
00:35:46.463 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:46.463 23:41:43 keyring_file -- common/autotest_common.sh@974 -- # wait 1563200
00:35:46.463 23:41:44 keyring_file -- keyring/file.sh@117 -- # bperfpid=1564647
00:35:46.463 23:41:44 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1564647 /var/tmp/bperf.sock
00:35:46.463 23:41:44 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1564647 ']'
00:35:46.463 23:41:44 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:46.463 23:41:44 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63
00:35:46.463 23:41:44 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100
00:35:46.463 23:41:44 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
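The restart above is the configuration round-trip half of the test: the JSON captured from the first bdevperf instance with save_config is echoed back into the second instance through the -c /dev/fd/63 process substitution, so the new process recreates the two file keys and the PSK-protected controller without any further RPCs. A minimal sketch of the same pattern, assuming the workspace layout used throughout this run and a temp file in place of the file descriptor:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config > /tmp/bperf_config.json
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /tmp/bperf_config.json

The configuration echoed in the next records is exactly the JSON that save_config printed above.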
00:35:46.463 23:41:44 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:35:46.463 "subsystems": [ 00:35:46.463 { 00:35:46.463 "subsystem": "keyring", 00:35:46.463 "config": [ 00:35:46.463 { 00:35:46.463 "method": "keyring_file_add_key", 00:35:46.463 "params": { 00:35:46.463 "name": "key0", 00:35:46.463 "path": "/tmp/tmp.sxni0yYJ31" 00:35:46.463 } 00:35:46.463 }, 00:35:46.463 { 00:35:46.463 "method": "keyring_file_add_key", 00:35:46.463 "params": { 00:35:46.463 "name": "key1", 00:35:46.463 "path": "/tmp/tmp.BUxNMjiKB0" 00:35:46.463 } 00:35:46.463 } 00:35:46.463 ] 00:35:46.463 }, 00:35:46.463 { 00:35:46.463 "subsystem": "iobuf", 00:35:46.463 "config": [ 00:35:46.463 { 00:35:46.463 "method": "iobuf_set_options", 00:35:46.463 "params": { 00:35:46.463 "small_pool_count": 8192, 00:35:46.463 "large_pool_count": 1024, 00:35:46.463 "small_bufsize": 8192, 00:35:46.463 "large_bufsize": 135168 00:35:46.463 } 00:35:46.463 } 00:35:46.463 ] 00:35:46.463 }, 00:35:46.463 { 00:35:46.463 "subsystem": "sock", 00:35:46.463 "config": [ 00:35:46.463 { 00:35:46.463 "method": "sock_set_default_impl", 00:35:46.463 "params": { 00:35:46.463 "impl_name": "posix" 00:35:46.463 } 00:35:46.463 }, 00:35:46.463 { 00:35:46.463 "method": "sock_impl_set_options", 00:35:46.463 "params": { 00:35:46.463 "impl_name": "ssl", 00:35:46.463 "recv_buf_size": 4096, 00:35:46.463 "send_buf_size": 4096, 00:35:46.463 "enable_recv_pipe": true, 00:35:46.463 "enable_quickack": false, 00:35:46.463 "enable_placement_id": 0, 00:35:46.463 "enable_zerocopy_send_server": true, 00:35:46.463 "enable_zerocopy_send_client": false, 00:35:46.463 "zerocopy_threshold": 0, 00:35:46.463 "tls_version": 0, 00:35:46.463 "enable_ktls": false 00:35:46.463 } 00:35:46.463 }, 00:35:46.463 { 00:35:46.463 "method": "sock_impl_set_options", 00:35:46.463 "params": { 00:35:46.463 "impl_name": "posix", 00:35:46.463 "recv_buf_size": 2097152, 00:35:46.463 "send_buf_size": 2097152, 00:35:46.463 "enable_recv_pipe": true, 00:35:46.463 "enable_quickack": false, 00:35:46.463 "enable_placement_id": 0, 00:35:46.463 "enable_zerocopy_send_server": true, 00:35:46.463 "enable_zerocopy_send_client": false, 00:35:46.463 "zerocopy_threshold": 0, 00:35:46.463 "tls_version": 0, 00:35:46.463 "enable_ktls": false 00:35:46.463 } 00:35:46.463 } 00:35:46.463 ] 00:35:46.463 }, 00:35:46.463 { 00:35:46.463 "subsystem": "vmd", 00:35:46.463 "config": [] 00:35:46.463 }, 00:35:46.463 { 00:35:46.463 "subsystem": "accel", 00:35:46.463 "config": [ 00:35:46.463 { 00:35:46.463 "method": "accel_set_options", 00:35:46.463 "params": { 00:35:46.463 "small_cache_size": 128, 00:35:46.463 "large_cache_size": 16, 00:35:46.463 "task_count": 2048, 00:35:46.463 "sequence_count": 2048, 00:35:46.463 "buf_count": 2048 00:35:46.463 } 00:35:46.463 } 00:35:46.463 ] 00:35:46.463 }, 00:35:46.463 { 00:35:46.463 "subsystem": "bdev", 00:35:46.463 "config": [ 00:35:46.463 { 00:35:46.463 "method": "bdev_set_options", 00:35:46.463 "params": { 00:35:46.463 "bdev_io_pool_size": 65535, 00:35:46.464 "bdev_io_cache_size": 256, 00:35:46.464 "bdev_auto_examine": true, 00:35:46.464 "iobuf_small_cache_size": 128, 00:35:46.464 "iobuf_large_cache_size": 16 00:35:46.464 } 00:35:46.464 }, 00:35:46.464 { 00:35:46.464 "method": "bdev_raid_set_options", 00:35:46.464 "params": { 00:35:46.464 "process_window_size_kb": 1024, 00:35:46.464 "process_max_bandwidth_mb_sec": 0 00:35:46.464 } 00:35:46.464 }, 00:35:46.464 { 00:35:46.464 "method": "bdev_iscsi_set_options", 00:35:46.464 "params": { 00:35:46.464 "timeout_sec": 30 00:35:46.464 } 00:35:46.464 
}, 00:35:46.464 { 00:35:46.464 "method": "bdev_nvme_set_options", 00:35:46.464 "params": { 00:35:46.464 "action_on_timeout": "none", 00:35:46.464 "timeout_us": 0, 00:35:46.464 "timeout_admin_us": 0, 00:35:46.464 "keep_alive_timeout_ms": 10000, 00:35:46.464 "arbitration_burst": 0, 00:35:46.464 "low_priority_weight": 0, 00:35:46.464 "medium_priority_weight": 0, 00:35:46.464 "high_priority_weight": 0, 00:35:46.464 "nvme_adminq_poll_period_us": 10000, 00:35:46.464 "nvme_ioq_poll_period_us": 0, 00:35:46.464 "io_queue_requests": 512, 00:35:46.464 "delay_cmd_submit": true, 00:35:46.464 "transport_retry_count": 4, 00:35:46.464 "bdev_retry_count": 3, 00:35:46.464 "transport_ack_timeout": 0, 00:35:46.464 "ctrlr_loss_timeout_sec": 0, 00:35:46.464 "reconnect_delay_sec": 0, 00:35:46.464 "fast_io_fail_timeout_sec": 0, 00:35:46.464 "disable_auto_failback": false, 00:35:46.464 "generate_uuids": false, 00:35:46.464 "transport_tos": 0, 00:35:46.464 "nvme_error_stat": false, 00:35:46.464 "rdma_srq_size": 0, 00:35:46.464 "io_path_stat": false, 00:35:46.464 "allow_accel_sequence": false, 00:35:46.464 "rdma_max_cq_size": 0, 00:35:46.464 "rdma_cm_event_timeout_ms": 0, 00:35:46.464 "dhchap_digests": [ 00:35:46.464 "sha256", 00:35:46.464 "sha384", 00:35:46.464 "sha512" 00:35:46.464 ], 00:35:46.464 "dhchap_dhgroups": [ 00:35:46.464 "null", 00:35:46.464 "ffdhe2048", 00:35:46.464 "ffdhe3072", 00:35:46.464 "ffdhe4096", 00:35:46.464 "ffdhe6144", 00:35:46.464 "ffdhe8192" 00:35:46.464 ] 00:35:46.464 } 00:35:46.464 }, 00:35:46.464 { 00:35:46.464 "method": "bdev_nvme_attach_controller", 00:35:46.464 "params": { 00:35:46.464 "name": "nvme0", 00:35:46.464 "trtype": "TCP", 00:35:46.464 "adrfam": "IPv4", 00:35:46.464 "traddr": "127.0.0.1", 00:35:46.464 "trsvcid": "4420", 00:35:46.464 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:46.464 "prchk_reftag": false, 00:35:46.464 "prchk_guard": false, 00:35:46.464 "ctrlr_loss_timeout_sec": 0, 00:35:46.464 "reconnect_delay_sec": 0, 00:35:46.464 "fast_io_fail_timeout_sec": 0, 00:35:46.464 "psk": "key0", 00:35:46.464 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:46.464 "hdgst": false, 00:35:46.464 "ddgst": false 00:35:46.464 } 00:35:46.464 }, 00:35:46.464 { 00:35:46.464 "method": "bdev_nvme_set_hotplug", 00:35:46.464 "params": { 00:35:46.464 "period_us": 100000, 00:35:46.464 "enable": false 00:35:46.464 } 00:35:46.464 }, 00:35:46.464 { 00:35:46.464 "method": "bdev_wait_for_examine" 00:35:46.464 } 00:35:46.464 ] 00:35:46.464 }, 00:35:46.464 { 00:35:46.464 "subsystem": "nbd", 00:35:46.464 "config": [] 00:35:46.464 } 00:35:46.464 ] 00:35:46.464 }' 00:35:46.464 23:41:44 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:46.464 23:41:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:46.721 [2024-07-25 23:41:44.197266] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:35:46.721 [2024-07-25 23:41:44.197344] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1564647 ] 00:35:46.721 EAL: No free 2048 kB hugepages reported on node 1 00:35:46.721 [2024-07-25 23:41:44.228086] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
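Once the reactor below comes up, the suite re-runs its reference-count assertions against the restored keyring. Every get_refcnt in this log is the same two-step recipe and can be reproduced by hand (key name per this test):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
      | jq -r '.[] | select(.name == "key0") | .refcnt'

A refcnt of 1 means only the keyring itself holds the key; it reads 2 while an attached controller is using that key as its PSK, which is why the (( 2 == 2 )) checks below pass only once bdev_nvme_attach_controller has succeeded from the restored configuration.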
00:35:46.721 [2024-07-25 23:41:44.256889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:46.721 [2024-07-25 23:41:44.341091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:46.979 [2024-07-25 23:41:44.521492] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:47.545 23:41:45 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:47.545 23:41:45 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:35:47.545 23:41:45 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:35:47.545 23:41:45 keyring_file -- keyring/file.sh@120 -- # jq length 00:35:47.545 23:41:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:47.802 23:41:45 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:35:47.802 23:41:45 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:35:47.802 23:41:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:47.802 23:41:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:47.802 23:41:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:47.802 23:41:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:47.802 23:41:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:48.060 23:41:45 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:48.060 23:41:45 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:35:48.060 23:41:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:48.060 23:41:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:48.060 23:41:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:48.060 23:41:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:48.060 23:41:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:48.317 23:41:45 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:35:48.317 23:41:45 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:35:48.317 23:41:45 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:35:48.317 23:41:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:48.580 23:41:46 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:35:48.580 23:41:46 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:48.580 23:41:46 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.sxni0yYJ31 /tmp/tmp.BUxNMjiKB0 00:35:48.580 23:41:46 keyring_file -- keyring/file.sh@20 -- # killprocess 1564647 00:35:48.580 23:41:46 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1564647 ']' 00:35:48.580 23:41:46 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1564647 00:35:48.580 23:41:46 keyring_file -- common/autotest_common.sh@955 -- # uname 00:35:48.580 23:41:46 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:48.580 23:41:46 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1564647 00:35:48.580 23:41:46 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:48.580 23:41:46 
keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:35:48.580 23:41:46 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1564647'
killing process with pid 1564647
00:35:48.580 23:41:46 keyring_file -- common/autotest_common.sh@969 -- # kill 1564647
00:35:48.580 Received shutdown signal, test time was about 1.000000 seconds
00:35:48.580
00:35:48.580 Latency(us)
00:35:48.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:48.580 ===================================================================================================================
00:35:48.580 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:35:48.580 23:41:46 keyring_file -- common/autotest_common.sh@974 -- # wait 1564647
00:35:48.886 23:41:46 keyring_file -- keyring/file.sh@21 -- # killprocess 1563181
00:35:48.886 23:41:46 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1563181 ']'
00:35:48.886 23:41:46 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1563181
00:35:48.886 23:41:46 keyring_file -- common/autotest_common.sh@955 -- # uname
00:35:48.886 23:41:46 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:35:48.886 23:41:46 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1563181
00:35:48.886 23:41:46 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:35:48.886 23:41:46 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:35:48.886 23:41:46 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1563181'
00:35:48.886 killing process with pid 1563181
00:35:48.886 23:41:46 keyring_file -- common/autotest_common.sh@969 -- # kill 1563181
00:35:48.886 [2024-07-25 23:41:46.395145] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:35:48.886 23:41:46 keyring_file -- common/autotest_common.sh@974 -- # wait 1563181
00:35:49.145
00:35:49.145 real 0m14.108s
00:35:49.145 user 0m35.197s
00:35:49.145 sys 0m3.333s
00:35:49.145 23:41:46 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable
00:35:49.145 23:41:46 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:35:49.145 ************************************
00:35:49.145 END TEST keyring_file
00:35:49.145 ************************************
00:35:49.145 23:41:46 -- spdk/autotest.sh@300 -- # [[ y == y ]]
00:35:49.145 23:41:46 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh
00:35:49.145 23:41:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:35:49.145 23:41:46 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:35:49.145 23:41:46 -- common/autotest_common.sh@10 -- # set +x
00:35:49.145 ************************************
00:35:49.145 START TEST keyring_linux
00:35:49.145 ************************************
00:35:49.145 23:41:46 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh
00:35:49.145 * Looking for test storage...
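Both keyring suites mint their test keys in the NVMe/TCP PSK interchange format: prep_key hands the hex digits to format_interchange_psk, whose inline python - step (visible above at nvmf/common.sh@705) emits NVMeTLSkey-1:<digest>:<base64 payload>:, the payload being the key bytes followed by their CRC-32. A rough stand-in for that helper, assuming the 00112233445566778899aabbccddeeff key and digest 0 used in this run:

  python3 - <<'EOF'
  import base64, struct, zlib
  key = b"00112233445566778899aabbccddeeff"   # key material as passed by prep_key
  crc = struct.pack("<I", zlib.crc32(key))    # little-endian CRC-32 of the key bytes
  print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
  EOF

keyring_file parks the resulting string in a mktemp file locked down to 0600, while the keyring_linux run that begins here pushes it into the kernel session keyring instead.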
00:35:49.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:49.145 23:41:46 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:49.145 23:41:46 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:49.145 23:41:46 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:49.404 23:41:46 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:49.404 23:41:46 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:49.404 23:41:46 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:49.404 23:41:46 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:49.404 23:41:46 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:49.404 23:41:46 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:49.404 23:41:46 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:49.404 23:41:46 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:49.404 23:41:46 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:49.404 23:41:46 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:49.404 23:41:46 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:49.404 23:41:46 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:49.404 23:41:46 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:49.404 23:41:46 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:49.404 23:41:46 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:49.404 23:41:46 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:49.404 23:41:46 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:49.404 23:41:46 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:49.404 23:41:46 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:49.404 23:41:46 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:49.404 23:41:46 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.404 23:41:46 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.404 23:41:46 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.404 23:41:46 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:49.404 23:41:46 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.404 23:41:46 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:35:49.404 23:41:46 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:49.404 23:41:46 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:49.404 23:41:46 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:49.404 23:41:46 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:49.404 23:41:46 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:49.404 23:41:46 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:49.404 23:41:46 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:49.404 23:41:46 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:49.404 23:41:46 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:49.404 23:41:46 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:49.404 23:41:46 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:49.404 23:41:46 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:49.404 23:41:46 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:49.404 23:41:46 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:49.404 23:41:46 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:49.404 23:41:46 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:49.404 23:41:46 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:49.404 23:41:46 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:49.405 23:41:46 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:49.405 23:41:46 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:49.405 23:41:46 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:49.405 23:41:46 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:49.405 23:41:46 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:35:49.405 23:41:46 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:49.405 23:41:46 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:35:49.405 23:41:46 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:35:49.405 23:41:46 keyring_linux -- nvmf/common.sh@705 -- # python - 00:35:49.405 23:41:46 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:49.405 23:41:46 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:49.405 /tmp/:spdk-test:key0 00:35:49.405 23:41:46 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:49.405 23:41:46 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:49.405 23:41:46 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:49.405 23:41:46 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:49.405 23:41:46 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:49.405 23:41:46 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:49.405 23:41:46 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:49.405 23:41:46 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:49.405 23:41:46 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:35:49.405 23:41:46 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:49.405 23:41:46 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:35:49.405 23:41:46 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:35:49.405 23:41:46 keyring_linux -- nvmf/common.sh@705 -- # python - 00:35:49.405 23:41:46 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:49.405 23:41:46 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:49.405 /tmp/:spdk-test:key1 00:35:49.405 23:41:46 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1565012 00:35:49.405 23:41:46 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:49.405 23:41:46 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1565012 00:35:49.405 23:41:46 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1565012 ']' 00:35:49.405 23:41:46 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:49.405 23:41:46 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:49.405 23:41:46 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:49.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:49.405 23:41:46 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:49.405 23:41:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:49.405 [2024-07-25 23:41:47.008038] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:35:49.405 [2024-07-25 23:41:47.008135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1565012 ] 00:35:49.405 EAL: No free 2048 kB hugepages reported on node 1 00:35:49.405 [2024-07-25 23:41:47.039807] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
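The spdk_tgt coming up here marks where the linux suite departs from keyring_file: instead of registering key file paths over RPC, it stores the interchange-formatted PSKs in the kernel session keyring and lets SPDK resolve them by name. The keyctl traffic in the next records reduces to the following (serials such as 72976063 vary per run):

  keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
  keyctl search @s user :spdk-test:key0    # resolves the name back to the serial printed by add
  keyctl print 72976063                    # dumps the payload so the test can compare it

The @s argument targets the session keyring, which is where this test makes its keys visible to SPDK.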
00:35:49.405 [2024-07-25 23:41:47.067149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:49.664 [2024-07-25 23:41:47.152760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:49.922 23:41:47 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:49.922 23:41:47 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:35:49.922 23:41:47 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:49.922 23:41:47 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.922 23:41:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:49.922 [2024-07-25 23:41:47.397164] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:49.922 null0 00:35:49.922 [2024-07-25 23:41:47.429205] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:49.922 [2024-07-25 23:41:47.429721] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:49.922 23:41:47 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.922 23:41:47 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:49.922 72976063 00:35:49.922 23:41:47 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:49.922 1043990949 00:35:49.922 23:41:47 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1565143 00:35:49.922 23:41:47 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:49.922 23:41:47 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1565143 /var/tmp/bperf.sock 00:35:49.922 23:41:47 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1565143 ']' 00:35:49.922 23:41:47 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:49.922 23:41:47 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:49.922 23:41:47 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:49.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:49.922 23:41:47 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:49.922 23:41:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:49.922 [2024-07-25 23:41:47.497710] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.07.0-rc3 initialization... 00:35:49.922 [2024-07-25 23:41:47.497783] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1565143 ] 00:35:49.922 EAL: No free 2048 kB hugepages reported on node 1 00:35:49.922 [2024-07-25 23:41:47.533781] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
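[Editor's note] The two keyctl lines above are the heart of the keyring_linux setup: each PSK string is loaded into the session keyring (@s) under the test's :spdk-test:key0 / :spdk-test:key1 names, and keyctl prints the kernel-assigned serial (72976063 and 1043990949 here) that the later check_keys assertions compare against. Sketched standalone with the key0 value taken verbatim from the trace:

  keyctl add user ":spdk-test:key0" "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s   # prints the serial
  keyctl search @s user ":spdk-test:key0"    # resolves the name back to that serial
  keyctl print 72976063                      # dumps the payload for the [[ ... == ... ]] comparison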
00:35:49.922 [2024-07-25 23:41:47.563566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:50.180 [2024-07-25 23:41:47.653970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:50.180 23:41:47 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:50.180 23:41:47 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:35:50.180 23:41:47 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:50.180 23:41:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:50.438 23:41:47 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:50.438 23:41:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:50.695 23:41:48 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:50.695 23:41:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:50.953 [2024-07-25 23:41:48.515127] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:50.953 nvme0n1 00:35:50.953 23:41:48 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:35:50.953 23:41:48 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:50.953 23:41:48 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:50.953 23:41:48 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:50.953 23:41:48 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:50.953 23:41:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:51.210 23:41:48 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:51.210 23:41:48 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:51.210 23:41:48 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:51.210 23:41:48 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:51.210 23:41:48 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:51.210 23:41:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:51.210 23:41:48 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:51.469 23:41:49 keyring_linux -- keyring/linux.sh@25 -- # sn=72976063 00:35:51.469 23:41:49 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:51.469 23:41:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:51.469 23:41:49 keyring_linux -- keyring/linux.sh@26 -- # [[ 72976063 == \7\2\9\7\6\0\6\3 ]] 00:35:51.469 23:41:49 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 72976063 00:35:51.469 23:41:49 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:51.469 23:41:49 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:51.469 Running I/O for 1 seconds... 00:35:52.843 00:35:52.843 Latency(us) 00:35:52.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:52.843 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:52.843 nvme0n1 : 1.01 6790.63 26.53 0.00 0.00 18723.47 4466.16 27767.85 00:35:52.843 =================================================================================================================== 00:35:52.843 Total : 6790.63 26.53 0.00 0.00 18723.47 4466.16 27767.85 00:35:52.843 0 00:35:52.843 23:41:50 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:52.843 23:41:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:52.843 23:41:50 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:52.843 23:41:50 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:52.843 23:41:50 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:52.843 23:41:50 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:52.843 23:41:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:52.843 23:41:50 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:53.101 23:41:50 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:53.101 23:41:50 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:53.101 23:41:50 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:53.101 23:41:50 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:53.101 23:41:50 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:35:53.101 23:41:50 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:53.101 23:41:50 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:53.101 23:41:50 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:53.101 23:41:50 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:53.101 23:41:50 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:53.101 23:41:50 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:53.101 23:41:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:53.359 [2024-07-25 23:41:50.996564] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:53.359 [2024-07-25 23:41:50.996576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ff690 (107): Transport endpoint is not connected 00:35:53.359 [2024-07-25 23:41:50.997568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ff690 (9): Bad file descriptor 00:35:53.359 [2024-07-25 23:41:50.998567] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:53.359 [2024-07-25 23:41:50.998590] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:53.359 [2024-07-25 23:41:50.998605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:53.359 request: 00:35:53.359 { 00:35:53.360 "name": "nvme0", 00:35:53.360 "trtype": "tcp", 00:35:53.360 "traddr": "127.0.0.1", 00:35:53.360 "adrfam": "ipv4", 00:35:53.360 "trsvcid": "4420", 00:35:53.360 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:53.360 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:53.360 "prchk_reftag": false, 00:35:53.360 "prchk_guard": false, 00:35:53.360 "hdgst": false, 00:35:53.360 "ddgst": false, 00:35:53.360 "psk": ":spdk-test:key1", 00:35:53.360 "method": "bdev_nvme_attach_controller", 00:35:53.360 "req_id": 1 00:35:53.360 } 00:35:53.360 Got JSON-RPC error response 00:35:53.360 response: 00:35:53.360 { 00:35:53.360 "code": -5, 00:35:53.360 "message": "Input/output error" 00:35:53.360 } 00:35:53.360 23:41:51 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:35:53.360 23:41:51 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:53.360 23:41:51 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:53.360 23:41:51 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:53.360 23:41:51 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:53.360 23:41:51 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:53.360 23:41:51 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:53.360 23:41:51 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:53.360 23:41:51 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:53.360 23:41:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:53.360 23:41:51 keyring_linux -- keyring/linux.sh@33 -- # sn=72976063 00:35:53.360 23:41:51 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 72976063 00:35:53.360 1 links removed 00:35:53.360 23:41:51 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:53.360 23:41:51 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:53.360 23:41:51 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:53.360 23:41:51 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:53.360 23:41:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:53.360 23:41:51 keyring_linux -- keyring/linux.sh@33 -- # sn=1043990949 00:35:53.360 23:41:51 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1043990949 00:35:53.360 1 links removed 00:35:53.360 23:41:51 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1565143 00:35:53.360 23:41:51 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1565143 ']' 00:35:53.360 23:41:51 
keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1565143 00:35:53.360 23:41:51 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:35:53.360 23:41:51 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:53.360 23:41:51 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1565143 00:35:53.360 23:41:51 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:53.360 23:41:51 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:53.360 23:41:51 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1565143' 00:35:53.360 killing process with pid 1565143 00:35:53.360 23:41:51 keyring_linux -- common/autotest_common.sh@969 -- # kill 1565143 00:35:53.360 Received shutdown signal, test time was about 1.000000 seconds 00:35:53.360 00:35:53.360 Latency(us) 00:35:53.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:53.360 =================================================================================================================== 00:35:53.360 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:53.360 23:41:51 keyring_linux -- common/autotest_common.sh@974 -- # wait 1565143 00:35:53.618 23:41:51 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1565012 00:35:53.618 23:41:51 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1565012 ']' 00:35:53.618 23:41:51 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1565012 00:35:53.618 23:41:51 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:35:53.618 23:41:51 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:53.618 23:41:51 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1565012 00:35:53.618 23:41:51 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:53.618 23:41:51 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:53.618 23:41:51 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1565012' 00:35:53.618 killing process with pid 1565012 00:35:53.618 23:41:51 keyring_linux -- common/autotest_common.sh@969 -- # kill 1565012 00:35:53.618 23:41:51 keyring_linux -- common/autotest_common.sh@974 -- # wait 1565012 00:35:54.182 00:35:54.182 real 0m4.818s 00:35:54.182 user 0m9.231s 00:35:54.182 sys 0m1.555s 00:35:54.182 23:41:51 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:54.182 23:41:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:54.182 ************************************ 00:35:54.182 END TEST keyring_linux 00:35:54.182 ************************************ 00:35:54.182 23:41:51 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:35:54.182 23:41:51 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:35:54.182 23:41:51 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:35:54.182 23:41:51 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:35:54.182 23:41:51 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:35:54.182 23:41:51 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:35:54.182 23:41:51 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:35:54.182 23:41:51 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:35:54.182 23:41:51 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:35:54.182 23:41:51 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:35:54.182 23:41:51 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:35:54.182 23:41:51 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 
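[Editor's note] killprocess, traced twice above (the bperf pid 1565143 running as reactor_1, then the target pid 1565012 as reactor_0), follows a guard-then-kill shape. A rough re-sketch of that shape using only the steps visible in the trace; this is an illustration, not the autotest_common.sh source:

  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                    # the '[' -z ... ']' guard in the trace
    kill -0 "$pid" 2>/dev/null || return 0       # nothing to do if the process is already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_0 / reactor_1 above
    [ "$name" = sudo ] && return 1               # the trace refuses to kill sudo wrappers
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true              # wait only succeeds for child processes
  }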
00:35:54.182 23:41:51 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:35:54.182 23:41:51 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:35:54.182 23:41:51 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:35:54.182 23:41:51 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:35:54.182 23:41:51 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:35:54.182 23:41:51 -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:54.182 23:41:51 -- common/autotest_common.sh@10 -- # set +x 00:35:54.182 23:41:51 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:35:54.182 23:41:51 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:35:54.182 23:41:51 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:35:54.182 23:41:51 -- common/autotest_common.sh@10 -- # set +x 00:35:56.080 INFO: APP EXITING 00:35:56.080 INFO: killing all VMs 00:35:56.080 INFO: killing vhost app 00:35:56.080 INFO: EXIT DONE 00:35:57.013 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:35:57.013 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:35:57.013 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:35:57.013 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:35:57.013 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:35:57.013 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:35:57.013 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:35:57.013 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:35:57.013 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:35:57.013 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:35:57.014 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:35:57.014 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:35:57.014 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:35:57.014 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:35:57.014 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:35:57.014 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:35:57.014 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:35:58.386 Cleaning 00:35:58.386 Removing: /var/run/dpdk/spdk0/config 00:35:58.386 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:58.386 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:58.386 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:58.386 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:58.386 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:58.386 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:58.386 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:58.386 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:58.386 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:58.386 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:58.386 Removing: /var/run/dpdk/spdk1/config 00:35:58.386 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:58.386 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:58.386 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:58.386 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:58.386 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:58.386 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:58.386 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:58.386 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:58.386 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:58.386 Removing: 
/var/run/dpdk/spdk1/hugepage_info 00:35:58.386 Removing: /var/run/dpdk/spdk1/mp_socket 00:35:58.386 Removing: /var/run/dpdk/spdk2/config 00:35:58.386 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:58.387 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:58.387 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:58.387 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:58.387 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:58.387 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:58.387 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:58.387 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:58.387 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:58.387 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:58.387 Removing: /var/run/dpdk/spdk3/config 00:35:58.387 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:58.387 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:58.387 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:58.387 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:58.387 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:58.387 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:58.387 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:58.387 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:58.387 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:58.387 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:58.387 Removing: /var/run/dpdk/spdk4/config 00:35:58.387 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:58.387 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:58.387 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:58.387 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:58.387 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:58.387 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:58.387 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:58.387 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:58.387 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:58.387 Removing: /var/run/dpdk/spdk4/hugepage_info 00:35:58.387 Removing: /dev/shm/bdev_svc_trace.1 00:35:58.387 Removing: /dev/shm/nvmf_trace.0 00:35:58.387 Removing: /dev/shm/spdk_tgt_trace.pid1249486 00:35:58.387 Removing: /var/run/dpdk/spdk0 00:35:58.387 Removing: /var/run/dpdk/spdk1 00:35:58.387 Removing: /var/run/dpdk/spdk2 00:35:58.387 Removing: /var/run/dpdk/spdk3 00:35:58.387 Removing: /var/run/dpdk/spdk4 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1247916 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1248650 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1249486 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1249897 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1250586 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1250728 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1251446 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1251456 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1251698 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1253011 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1253940 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1254174 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1254430 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1254634 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1254822 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1254980 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1255138 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1255323 00:35:58.387 
Removing: /var/run/dpdk/spdk_pid1255632 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1257984 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1258148 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1258309 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1258388 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1258744 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1258753 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1259178 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1259187 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1259478 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1259487 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1259651 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1259777 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1260201 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1260415 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1260612 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1262929 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1265684 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1272647 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1273079 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1275474 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1275747 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1278252 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1281967 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1284028 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1290305 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1295512 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1296822 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1297603 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1308348 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1310744 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1364448 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1367726 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1371547 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1375267 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1375275 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1375932 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1376526 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1377119 00:35:58.387 Removing: /var/run/dpdk/spdk_pid1377520 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1377555 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1377781 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1377910 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1377923 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1378537 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1379112 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1379764 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1380164 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1380173 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1380424 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1381256 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1382028 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1387223 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1412617 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1415898 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1416988 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1418289 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1418425 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1418558 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1418580 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1419011 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1420326 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1420929 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1421356 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1422968 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1423270 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1423826 00:35:58.646 
Removing: /var/run/dpdk/spdk_pid1426218 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1429468 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1432990 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1456462 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1459109 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1463001 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1463952 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1464932 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1467566 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1469856 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1474079 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1474172 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1477447 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1477580 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1477712 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1477984 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1477989 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1479058 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1480238 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1481419 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1482599 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1483886 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1485068 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1488752 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1489085 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1490479 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1491212 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1494809 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1496776 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1500190 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1503634 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1510462 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1514799 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1514807 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1527022 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1527422 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1527834 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1528347 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1528820 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1529220 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1529632 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1530152 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1532529 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1532785 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1536617 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1536703 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1538848 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1543873 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1543878 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1546755 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1548049 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1549444 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1550307 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1551714 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1552469 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1557776 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1558127 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1558518 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1560065 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1560344 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1560739 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1563181 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1563200 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1564647 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1565012 00:35:58.646 Removing: /var/run/dpdk/spdk_pid1565143 00:35:58.646 Clean 00:35:58.646 23:41:56 -- 
common/autotest_common.sh@1451 -- # return 0 00:35:58.646 23:41:56 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:35:58.646 23:41:56 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:58.646 23:41:56 -- common/autotest_common.sh@10 -- # set +x 00:35:58.904 23:41:56 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:35:58.904 23:41:56 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:58.904 23:41:56 -- common/autotest_common.sh@10 -- # set +x 00:35:58.904 23:41:56 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:35:58.904 23:41:56 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:35:58.904 23:41:56 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:35:58.904 23:41:56 -- spdk/autotest.sh@395 -- # hash lcov 00:35:58.904 23:41:56 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:35:58.904 23:41:56 -- spdk/autotest.sh@397 -- # hostname 00:35:58.904 23:41:56 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:35:58.904 geninfo: WARNING: invalid characters removed from testname! 00:36:30.976 23:42:24 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:30.976 23:42:28 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:34.273 23:42:31 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:36.860 23:42:34 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:40.158 23:42:37 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:42.703 23:42:40 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:46.002 23:42:43 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:46.002 23:42:43 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:46.002 23:42:43 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:36:46.002 23:42:43 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:46.002 23:42:43 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:46.002 23:42:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.002 23:42:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.002 23:42:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.002 23:42:43 -- paths/export.sh@5 -- $ export PATH 00:36:46.002 23:42:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.002 23:42:43 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:36:46.002 23:42:43 -- common/autobuild_common.sh@447 -- $ date +%s 00:36:46.002 23:42:43 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721943763.XXXXXX 00:36:46.002 23:42:43 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721943763.HPFrDg 00:36:46.002 23:42:43 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:36:46.002 23:42:43 -- common/autobuild_common.sh@453 -- $ '[' -n main ']' 00:36:46.002 23:42:43 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:36:46.002 23:42:43 -- 
common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:36:46.002 23:42:43 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:36:46.003 23:42:43 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:36:46.003 23:42:43 -- common/autobuild_common.sh@463 -- $ get_config_params 00:36:46.003 23:42:43 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:36:46.003 23:42:43 -- common/autotest_common.sh@10 -- $ set +x 00:36:46.003 23:42:43 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:36:46.003 23:42:43 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:36:46.003 23:42:43 -- pm/common@17 -- $ local monitor 00:36:46.003 23:42:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:46.003 23:42:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:46.003 23:42:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:46.003 23:42:43 -- pm/common@21 -- $ date +%s 00:36:46.003 23:42:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:46.003 23:42:43 -- pm/common@21 -- $ date +%s 00:36:46.003 23:42:43 -- pm/common@25 -- $ sleep 1 00:36:46.003 23:42:43 -- pm/common@21 -- $ date +%s 00:36:46.003 23:42:43 -- pm/common@21 -- $ date +%s 00:36:46.003 23:42:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721943763 00:36:46.003 23:42:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721943763 00:36:46.003 23:42:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721943763 00:36:46.003 23:42:43 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721943763 00:36:46.003 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721943763_collect-vmstat.pm.log 00:36:46.003 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721943763_collect-cpu-load.pm.log 00:36:46.003 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721943763_collect-cpu-temp.pm.log 00:36:46.003 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721943763_collect-bmc-pm.bmc.pm.log 00:36:46.573 23:42:44 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:36:46.573 23:42:44 
-- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:36:46.573 23:42:44 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:46.573 23:42:44 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:36:46.573 23:42:44 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:36:46.573 23:42:44 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:36:46.573 23:42:44 -- spdk/autopackage.sh@19 -- $ timing_finish 00:36:46.573 23:42:44 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:46.573 23:42:44 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:36:46.573 23:42:44 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:46.573 23:42:44 -- spdk/autopackage.sh@20 -- $ exit 0 00:36:46.573 23:42:44 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:36:46.573 23:42:44 -- pm/common@29 -- $ signal_monitor_resources TERM 00:36:46.573 23:42:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:36:46.573 23:42:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:46.573 23:42:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:36:46.573 23:42:44 -- pm/common@44 -- $ pid=1576753 00:36:46.573 23:42:44 -- pm/common@50 -- $ kill -TERM 1576753 00:36:46.573 23:42:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:46.573 23:42:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:36:46.573 23:42:44 -- pm/common@44 -- $ pid=1576755 00:36:46.573 23:42:44 -- pm/common@50 -- $ kill -TERM 1576755 00:36:46.573 23:42:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:46.573 23:42:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:36:46.573 23:42:44 -- pm/common@44 -- $ pid=1576757 00:36:46.573 23:42:44 -- pm/common@50 -- $ kill -TERM 1576757 00:36:46.573 23:42:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:46.574 23:42:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:36:46.574 23:42:44 -- pm/common@44 -- $ pid=1576787 00:36:46.574 23:42:44 -- pm/common@50 -- $ sudo -E kill -TERM 1576787 00:36:46.574 + [[ -n 1148279 ]] 00:36:46.574 + sudo kill 1148279 00:36:46.584 [Pipeline] } 00:36:46.603 [Pipeline] // stage 00:36:46.609 [Pipeline] } 00:36:46.629 [Pipeline] // timeout 00:36:46.635 [Pipeline] } 00:36:46.653 [Pipeline] // catchError 00:36:46.660 [Pipeline] } 00:36:46.679 [Pipeline] // wrap 00:36:46.686 [Pipeline] } 00:36:46.702 [Pipeline] // catchError 00:36:46.712 [Pipeline] stage 00:36:46.715 [Pipeline] { (Epilogue) 00:36:46.731 [Pipeline] catchError 00:36:46.733 [Pipeline] { 00:36:46.749 [Pipeline] echo 00:36:46.751 Cleanup processes 00:36:46.758 [Pipeline] sh 00:36:47.045 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:47.045 1576912 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:36:47.045 1577017 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:47.061 [Pipeline] sh 00:36:47.348 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:47.348 ++ grep -v 'sudo pgrep' 00:36:47.348 ++ awk 
'{print $1}' 00:36:47.348 + sudo kill -9 1576912 00:36:47.360 [Pipeline] sh 00:36:47.644 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:57.628 [Pipeline] sh 00:36:57.914 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:57.914 Artifacts sizes are good 00:36:57.929 [Pipeline] archiveArtifacts 00:36:57.936 Archiving artifacts 00:36:58.169 [Pipeline] sh 00:36:58.481 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:36:58.494 [Pipeline] cleanWs 00:36:58.504 [WS-CLEANUP] Deleting project workspace... 00:36:58.504 [WS-CLEANUP] Deferred wipeout is used... 00:36:58.511 [WS-CLEANUP] done 00:36:58.513 [Pipeline] } 00:36:58.532 [Pipeline] // catchError 00:36:58.543 [Pipeline] sh 00:36:58.826 + logger -p user.info -t JENKINS-CI 00:36:58.836 [Pipeline] } 00:36:58.851 [Pipeline] // stage 00:36:58.857 [Pipeline] } 00:36:58.874 [Pipeline] // node 00:36:58.879 [Pipeline] End of Pipeline 00:36:58.920 Finished: SUCCESS
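[Editor's note] One step from the tail of the run worth calling out: the coverage post-processing (autotest.sh@397 through @403) captures per-run data with lcov, merges it with the baseline, then whittles it down with -r filters before the job finishes. Reduced to its shape, with the long jenkins workspace paths and the repeated --rc/--no-external option block elided for readability (all flags shown appear verbatim in the log):

  lcov ... -q -c -d "$spdk_dir" -t spdk-gp-11 -o cov_test.info   # capture this run's counters
  lcov ... -q -a cov_base.info -a cov_test.info -o cov_total.info # merge baseline + test data
  lcov ... -q -r cov_total.info '*/dpdk/*' -o cov_total.info      # strip DPDK sources
  lcov ... -q -r cov_total.info '/usr/*'   -o cov_total.info      # strip system headers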